Abstract
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
The penetrability of visual perception to influences from higher-order cognition has been the subject of great controversy in psychology over the past century. Proponents of the “New Look” movement, spanning the 1940s and 1950s, argued that motivated states influence perceptual decisions about the world. For example, it was shown that poor children overestimate the size of coins (Bruner & Goodman, 1947) and that hungry people overrate the brightness of images of food (Gilchrist & Nesberg, 1952). More recent work has shown that higher-order cognitive factors, such as learned stimulus prediction value (O’Brien & Raymond, 2012), cognitive reappraisal (Blechert, Sheppes, Di Tella, Williams, & Gross, 2012), and motivation (Radel & Clément-Guillotin, 2012), can exert top-down influences on the early stages of visual perception. Moreover, a large body of research demonstrating perceptual learning has shown that perceptual systems can be trained to act more efficiently in accordance with the task demands, and that sensitivity to perceptual dimensions can be strategically tuned (for a review, see Goldstone, Landy, & Brunel, 2011).
These findings are not without controversy. Other researchers have argued that perceptual processes are highly modular, impervious to influences from cognitive states (Pylyshyn, 1999; Riesenhuber & Poggio, 2000). Proponents of this view have attributed instances of cognition influencing perception to either postperceptual decision processes or preperceptual attention-allocation processes (Pylyshyn, 1999). Pylyshyn has specifically argued that early vision is impenetrable. However, theories of cognitive impenetrability have difficulty accounting for several findings, such as demonstrations that prior knowledge influences the perception of color (Levin & Banaji, 2006; Macpherson, 2012), which cannot be easily attributed to the influences of attention.
The goal of this review is to synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. The semantic-memory, category-learning, and visual-discrimination literatures have remained relatively dissociated within the field of psychology; however, object identification and categorization both involve comparing an incoming visual representation with some representation of stored knowledge. Thus, a full characterization of visual object understanding will necessitate research into the nature of object representations, knowledge representations, and the processes that operate on these representations. A comprehensive account of the cognitive penetration of perceptual processes is beyond the scope of a psychological journal and has been suitably addressed in other venues (Macpherson, 2012; Siegel, 2012; Stokes, 2012). Instead, we will consider the bodies of literature that we believe best demonstrate the interactions between processes and brain areas traditionally considered perceptual or conceptual, and highlight the ways in which these different bodies of research can be integrated to gain a fuller understanding of high-level vision. The fundamental questions that we seek to address are: How are objects represented by the visual system? Where in the brain is object-related conceptual knowledge represented, and how does the activation of conceptual information about an object unfold? And what are the consequences of accessing conceptual knowledge for perception and perceptual decision-making about the visual world? In addressing these questions, we will first consider the two major theoretical frameworks in which interactions between perceptual and conceptual processing systems have been studied: categorical perception and embodied cognition. We will then discuss findings that are not easily assimilated within these frameworks.
Theoretical foundations: Categorical perception
One of the most robust sources of evidence for conceptual–perceptual interactions comes from research on the phenomenon of categorical perception. Categorical perception refers to our tendency to perceive the environment in terms of the categories we have formed, with continuous perceptual changes being perceived as a series of discrete qualitative changes separated by category boundaries (Harnad, 1987). Thus, category knowledge is used to abstract over perceptual differences between objects from the same class, and to highlight differences between objects from different classes (see Fig. 1).
One example of categorical perception effects at work is the way in which we perceive a rainbow. Although a rainbow is composed of a continuous range of light frequencies, changing smoothly from top to bottom, we perceive seven distinct bands of color. Even in tightly controlled laboratory experiments that use psychophysically balanced color spaces—thus controlling for low-level nonlinearities between colors—color perception is categorical. This is due to our higher-order conceptual representations (in this case, color category labels) shaping the output of color perception. The categorical perception of color is just one example of this phenomenon, since categorical perception has been observed for various other natural stimuli, including faces and trained novel objects (Goldstone, 1994; Goldstone, Lippa, & Shiffrin, 2001; Goldstone, Steyvers, & Rogosky, 2003; Levin & Beale, 2000; Livingston, Andrews, & Harnad, 1998; Lupyan, Thompson-Schill, & Swingley, 2010; Newell & Bülthoff, 2002; Sigala, Gabbiani, & Logothetis, 2002).
Why would we want to see our environment categorically? Given the complexity of the visual environment and the variation in visual features for objects from the same category, categorical perception is a useful information-compression mechanism. Categorical perception allows us to carve up the world into the categories that are relevant to our behavior, thus allowing us to more efficiently process the visual features that are relevant to these categories. For example, when presented with a poisonous snake, it is more useful to quickly process snake-relevant features for fast categorization than to attend to the visual features that discriminate this snake from other snakes. Note that we are not arguing that individuals are incapable of perceiving within-category differences; indeed, it has been shown that individuals are sensitive to within-category phonetic differences, which constitutes one of the strongest cases of categorical perception (McMurray, Aslin, Tanenhaus, Spivey, & Subik, 2008). Rather, we are arguing that such within-category differences are attenuated relative to those distinguishing exemplars from different categories.
Conceptual influences on perceptual decision-making can operate by modifying perception, attention, or decision processes, and recent research has been aimed at addressing when during visual stimulus processing the effects of categorical perception emerge. Behavioral studies have suggested that categorical perception modifies the discriminability of category-relevant features for faces (Goldstone et al., 2001) and for oriented lines (Notman, Sowden, & Özgen, 2005) through a process of perceptual learning. These findings thus suggest an early, perceptual locus for categorical perception effects, in which category learning modifies perceptual representations for learned objects. Electrophysiological research has supported these findings, with category differences being reflected in early markers of preattentive visual processing originating from the visual cortex, including the N1 and P1 components (Holmes, Franklin, Clifford, & Davies, 2009), as well as the vMMN component (Clifford, Holmes, Davies, & Franklin, 2010; Mo, Xu, Kay, & Tan, 2011).
Is categorical perception verbally mediated?
Language is often used to convey categorical knowledge, and ample research has shown that language and conceptual knowledge interact (Casasola, 2005; Gentner & Goldin-Meadow, 2003; Gumperz & Levinson, 1996; Levinson, 1997; Lupyan, Rakison, & McClelland, 2007; Snedeker & Gleitman, 2004; Spelke, 2003; Waxman & Markow, 1995; Yoshida & Smith, 2005). Indeed, recent findings have demonstrated that even noninformative, redundant labels can influence visual processing in striking ways (Lupyan & Spivey, 2010; Lupyan & Thompson-Schill, 2012). Lupyan and Thompson-Schill found that performance on an orientation discrimination task was facilitated when the image was preceded by the auditory presentation of a verbal label, but not by a sound that was equally associated with the object. For instance, participants more quickly and accurately indicated which side of a display contained an upright cow following the presentation of the word “cow,” but not following the presentation of an auditory “moo.” The authors also found that the priming effect for the presentation of labels was greater for objects that were rated as typical, and that thus presumably had a stronger relationship with the category's conceptual representation (Lupyan & Spivey, 2010; Lupyan & Thompson-Schill, 2012). These findings suggest that labels may have a special status in their ability to influence visual processing. Because category membership is typically demarcated by the presence of a label, it has been difficult to dissociate the influences of categorical relatedness from those of having shared verbal labels. An important question is, how critical are verbal labels in modulating the influence of conceptual knowledge on perception?
Several studies have suggested that object-to-label mapping may be necessary for categorical perception to occur. Research has shown that occupying verbal working memory interferes with the categorical perception of color patches (Roberson & Davidoff, 2000), and that this interference effect is consistent across languages (Winawer et al., 2007). Furthermore, categorical perception for faces only emerges when those faces are familiar or associated with names (Angeli, Davidoff, & Valentine, 2008; Kikutani, Roberson, & Hanley, 2008). It has also been shown that the categorical perception of color is strongest if the stimuli are presented in the right visual field, and thus directed to the left hemisphere language areas (Gilbert, Regier, Kay, & Ivry, 2006; Roberson, Pak, & Hanley, 2008), and that these effects are contingent upon the formation of color labels in childhood (Franklin, Drivonikou, Bevis, et al., 2008; Franklin, Drivonikou, Clifford, et al., 2008) or in adulthood (Zhou et al., 2010).
The preceding findings raise an interesting paradox: How can categorical perception be both robust, in that it alters early perceptual processing (Holmes et al., 2009) and warps perceptual representations (Goldstone, 1994; Goldstone et al., 2001), and fragile, in that it can be mitigated by manipulations of verbal working memory (Roberson & Davidoff, 2000) and can appear after small amounts of training (Zhou et al., 2010)? Lupyan (2012) proposed the label-feedback hypothesis as a potential solution to this problem. He argues that the distinction between verbal and nonverbal processes should be replaced by a system in which language is viewed as a modulator of a distributed and interactive process. According to this hypothesis, category-diagnostic perceptual features may automatically trigger the activation of labels, which then feed back to dynamically amplify category-diagnostic features that were activated by the label (Lupyan, 2012). Such a mechanism would enable perceptual representations to be modulated quickly and transiently and would presumably be up- or down-regulated by linguistic manipulations that modified the availability of labels, such as verbal working memory load.
It should be noted that a recent study has demonstrated that categorical perception effects could emerge for nonlinguistic categories, and interestingly, that these effects were stronger in the left hemisphere, as is typical for categories with verbal labels (Holmes & Wolff, 2012). These findings suggest that the frequently found left lateralization of categorical perception may not be due to the recruitment of language-processing areas per se, but may rather be due to the propensity of the left hemisphere for category-level perceptual discriminations (Marsolek, 1999; Marsolek & Burgund, 2008). Alternatively, participants may automatically label categories during training, in the absence of explicit instructions, and these implicitly formed labels may be recruited during subsequent visual processing. The latter possibility seems more consistent with work demonstrating that verbal interference attenuates categorical perception effects; however, future work using verbal interference paradigms during training could help distinguish between these two possibilities. To gain further traction in understanding the contribution of verbal labels to categorical perception, researchers could use noninvasive brain stimulation techniques, such as transcranial magnetic stimulation (TMS), to create a temporary lesion in language areas such as Wernicke’s and Broca’s areas. If categorical perception relies on the recruitment of language-processing resources, one would expect that decreasing activation in an area like Wernicke’s area would diminish categorical perception.
How is categorical perception instantiated in the brain?
As we discussed in the preceding section, behavioral research has suggested that categorical perception can operate by modifying perceptual representations, such that dimensions relevant to category membership are sensitized (Goldstone et al., 2001). If this is true, one would expect to see category-specific perceptual enhancements within the inferior temporal (IT) cortex, in which object processing takes place (Grill-Spector, 2003; Mishkin, Ungerleider, & Macko, 1983). Interestingly, it has been shown that category learning for shapes can cause monkey IT neurons to become more sensitive to variations along a category-relevant dimension than to variations along a category-irrelevant dimension (De Baene, Ons, Wagemans, & Vogels, 2008). In this study, monkeys learned to categorize novel objects into four sets of 16 objects, each based on either the curvature or the aspect ratio of the shapes. Importantly, the authors controlled for the effects of pretraining stimulus selectivity by counterbalancing the relevant dimension for categorization across animals, and by recording neuronal sensitivity for the objects before and after learning. The effect of category learning was small, however, with only 55 % of the recorded neurons demonstrating the effect.
In humans, fMRI adaptation paradigms have been used to probe whether category learning can similarly alter object representations in the visual and IT cortex. FMRI adaptation refers to the reduction in the fMRI BOLD response that is seen when a population of neurons is stimulated twice, such as when two identical objects are presented in succession (Grill-Spector, Henson, & Martin, 2006). The more similar two visual stimuli are, the more the BOLD response for the second stimulus is reduced (Grill-Spector & Malach, 2001), thus making fMRI adaptation a useful tool for probing the dimensions across which neural populations gauge similarity. Using this technique, Folstein, Palmeri, and Gauthier (2013) found that category learning increased the discriminability of neural representations along category-relevant dimensions in the ventral visual-processing stream. Participants learned to categorize morphed car stimuli into two categories on the basis of their resemblance to two parent cars (see Fig. 2). After category learning, fMRI adaptation was assessed during a location-matching task, for which category information was irrelevant. Adaptation was reduced along the object dimensions relevant to categorization within the object-selective cortex of the mid-fusiform gyrus, suggesting that neurons in this area had become more sensitive to perceptual variations relevant to the learned categories.
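The logic of fMRI adaptation described above can be illustrated with a toy calculation. The function and all numerical values below are hypothetical, chosen only to show how a release from adaptation along a category-relevant dimension would be interpreted; they are not taken from Folstein, Palmeri, and Gauthier (2013).

```python
# Toy illustration of the fMRI-adaptation logic: all values are
# hypothetical; a real analysis would use measured BOLD time courses.

def adaptation_index(bold_novel, bold_repeated):
    """Proportional reduction in BOLD response for a repeated stimulus.

    Larger values mean stronger adaptation, i.e., the neural
    population treats the two stimuli as more similar.
    """
    return (bold_novel - bold_repeated) / bold_novel

# Suppose the second car morph differs from the first along either the
# category-RELEVANT or the category-IRRELEVANT dimension:
relevant_change = adaptation_index(bold_novel=1.0, bold_repeated=0.9)
irrelevant_change = adaptation_index(bold_novel=1.0, bold_repeated=0.6)

# Less adaptation (a release from adaptation) for changes along the
# relevant dimension would indicate that neurons have become more
# sensitive to that dimension after category learning:
assert relevant_change < irrelevant_change
```

On this reading, reduced adaptation in the mid-fusiform gyrus for category-relevant changes is the signature of sharpened neural tuning along the learned dimension.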
Other studies that have failed to show similar enhancements of relevant dimensions in visual cortex after category learning (Gillebert, Op de Beeck, Panis, & Wagemans, 2008; Jiang et al., 2007; van der Linden, van Turennout, & Indefrey, 2010) either did not test for behavioral influences of category learning (Gillebert et al., 2008; van der Linden et al., 2010) or failed to find such behavioral effects (Jiang et al., 2007), leaving open the possibility that their training manipulations were not sufficient to engender the changes to object representations that are necessary for categorical perception to occur. This may be partially attributable to the stimuli used by these studies, which were created using a blended rather than a factorial morph-space, as had been used by Folstein et al. (2013). Essentially, the use of a factorial morph-space allows participants to make category distinctions on the basis of one dimension (although note that this dimension may be complex, as is the case with morphed car or face stimuli), whereas category decisions are made on the basis of four dimensions with a blended morph-space (for an illustration of the distinction between blended and factorial morph-spaces, refer to Folstein, Gauthier, & Palmeri, 2012). Thus, category learning may only change object representations when one perceptual dimension can be used to infer category membership.
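The contrast between the two morph-space designs can be sketched abstractly. The decision rules below are hypothetical simplifications meant only to show why a factorial space affords a one-dimensional category rule while a blended space does not; they do not reproduce the actual stimulus-generation procedures of these studies.

```python
# Schematic contrast between factorial and blended morph-spaces.
# Both functions are hypothetical illustrations of the category
# decision rules, not actual stimulus-construction code.

def factorial_category(relevant, irrelevant, boundary=0.5):
    """Factorial space: each stimulus has independent coordinates on
    two morph dimensions, and category membership depends on ONE of
    them; the other can vary freely within a category."""
    return "A" if relevant < boundary else "B"

def blended_category(w1, w2, w3, w4):
    """Blended space: each stimulus is a weighted mixture of four
    parent objects, so no single dimension separates the categories;
    the decision depends on the combined parent weights."""
    return "A" if (w1 + w2) > (w3 + w4) else "B"
```

Under the account above, only the factorial rule gives the perceptual system a single dimension to sensitize, which may explain why representational changes were observed with factorial but not blended spaces.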
Some evidence has suggested that category learning can lead to long-term, structural changes to the perceptual system (Kwok et al., 2011). In Kwok et al.’s study, participants learned subcategory shades of green and blue and associated these subcategories with meaningless names over a 2-h training period. This training resulted in an increase in the volume of gray matter in area V2/V3 of the left visual cortex, thus suggesting that the frequently found left-lateralized categorical perception of color may be due to structural changes in early visual cortex. Such findings seem to contradict the flexible nature of categorical perception, which can be up- or down-regulated by linguistic manipulations. It is important to note that the authors did not include a control group with no category training, and thus did not control for factors that could have artificially inflated the structural differences seen between the two scanning sessions, such as scanner drift. Additionally, a structural analysis of visual cortex was only performed once, immediately after training, so it is unclear whether the observed effects truly reflected long-term structural changes, or were more transient in nature. Some researchers have recently called into question the validity of studies showing that structural changes in gray matter density can be attained in adults using training paradigms (Thomas & Baker, 2013), and thus the results of Kwok et al. should be interpreted with caution. This is not to say that discrimination training with colors cannot lead to long-term structural changes in early visual cortex—indeed, developmental research suggests that structural plasticity during childhood shapes our visual and auditory perception of the environment (Wiesel & Hubel, 1965)—but rather that much more research is needed before we can accept findings demonstrating training-induced structural plasticity in adults.
Categorical perception: Summary and conclusions
Categorical perception is a pervasive demonstration of the dynamic interplay between conceptual category knowledge and perceptual processes. Behavioral, electrophysiological, and neuroimaging work has suggested that category knowledge can penetrate the early stages of visual analysis and engender changes to object representations. Additional work has suggested that these effects may be highly reliant on the recruitment of left hemisphere language-processing resources, which allow labels to modify perceptual representations according to category distinctions automatically and online, giving rise to categorical perception effects that are both flexible and robust.
Neuroimaging work has shown that categorical perception may arise through the restructuring of perceptual representations within the ventral visual-processing stream, but only when stimulus classes are used that allow participants to attend to one dimension for categorical distinctions (Folstein et al., 2012). Given the paucity of studies addressing the neural underpinnings of categorical perception for visual stimuli, it will be important for additional work to delineate the types and amounts of training that are sufficient for category knowledge to alter perceptual representations, and the constraints that variations in stimulus complexity may place on category learning.
Theoretical foundations: Embodied cognition
When considering how conceptual knowledge should affect the neural processing of visual stimuli, it is important to contemplate the predictions made by current theories of semantic representation in the brain. One of the most successful neural-based accounts of semantic memory is the embodied account, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action (Barsalou, 1999; Barsalou, Simmons, Barbey, & Wilson, 2003; Goldberg, Perfetti, & Schneider, 2006a, b; Martin, 2007). This hypothesis has been supported by behavioral work showing that perceptual and conceptual representations utilize shared resources (for a review, see Barsalou, 2008).
Most of the research supporting the embodied-cognition hypothesis has come from studies showing that motion perception and motion-language comprehension interact. Response times for semantic judgments about sentences containing motion-related words are influenced by the motion used to make a response (i.e., whether it is congruent or incongruent) and by simultaneously viewing a rotating cross in a motion-congruent or -incongruent direction (Zwaan & Taylor, 2006). However, the degree of temporal overlap and integrability determine whether concurrent motion perception interferes with or facilitates motion-language comprehension. When visually salient and attention-grabbing stimuli are used, motion perception can impair the comprehension of verbs implying motion in a congruent direction (Kaschak, Madden, & Therriault, 2005), whereas the inverse is true for stimuli that are nonsalient and easily integrated with the context of the sentence (Zwaan & Taylor, 2006). These dissociable influences of concurrent motion perception, depending on the visual saliency of the stimuli, have been supported by a study showing that lexical decisions for motion-related words (regardless of congruence) were less accurate when participants simultaneously viewed moving dots presented above the threshold for conscious awareness; however, participants were slower to respond to motion-incongruent, relative to congruent or non-motion-related, words when the dots were presented just below threshold (Meteyard, Zokaei, Bahrami, & Vigliocco, 2008).
Additional studies have shown that the perceptual processing of motion-related stimuli can be infiltrated by the conceptual processing of motion-related words. For example, participants more quickly identified shapes presented along the vertical axis of a screen when the shapes were preceded by verbs implying horizontal, relative to vertical, motion (Richardson, 2003). The author also found that participants more quickly responded to pictures in a vertical orientation that they had previously seen paired with a sentence associated with vertical (e.g., “the girl hoped for a horse”), relative to a horizontal (e.g., “the girl rushes to school”), context sentence, suggesting that abstract reference to motion can influence perceptual processing. Furthermore, it has been shown that the early stages of motion perception are penetrable to the semantic processing of motion; perceptual sensitivity (measured by d') for motion detection is impaired when participants simultaneously process motion-related words in an incongruent direction (Meteyard, Bahrami, & Vigliocco, 2007).
How is embodied cognition instantiated in the brain?
The strongest support for the embodied account has come from neuroimaging research showing that sensory and motor brain areas are recruited when performing semantic tasks that involve a sensory or motor modality (Barsalou, 2008). For example, the retrieval of tactile information is associated with the activation of somatosensory, motor, and premotor brain areas (Goldberg, Perfetti, & Schneider, 2006a, b; Oliver, Geiger, Lewandowski, & Thompson-Schill, 2009). Additionally, the retrieval of color knowledge is associated with the activation of brain areas involved in color perception, namely the left fusiform gyrus (Hsu, Frankland, & Thompson-Schill, 2012; Simmons et al., 2007) and the left lingual gyrus (Hsu et al., 2012). TMS research has further implicated sensory-motor brain areas as playing a causal role in conceptual processing by showing that stimulation of motor cortex can facilitate the processing of motion-related words (Pulvermüller, Hauk, & Nikulin, 2005; Willems, Labruna, D’Esposito, Ivry, & Casasanto, 2011). When repetitive transcranial magnetic stimulation (rTMS) is used to induce a temporary lesion to primary motor cortex (M1), interference is seen for the processing of motion-related words (Lo Gerfo et al., 2008), and these effects are specific to hand-related words when rTMS is directed to the hand portion of M1 (Repetto, Colombo, Cipresso, & Riva, 2013).
The above-mentioned work suggests that perceptually grounded conceptual knowledge is recruited automatically during stimulus processing; however, it remains unclear whether this knowledge constitutes part of the object representation itself, or instead reflects downstream activation of embodied semantic representations once identification has taken place. It has been shown that tools elicit greater activity in motor and premotor cortex than nonmanipulable objects during passive viewing, suggesting that these embodied representations may be utilized in the absence of semantic processing demands (Chao, Haxby, & Martin, 1999). Furthermore, Kiefer, Sim, Herrnberger, Grothe, and Hoenig (2008) found that the posterior superior temporal gyrus (pSTG) and middle temporal gyrus (MTG), both of which were activated during the perception of real sounds, were also activated quickly (within 150 ms) and automatically by the visual presentation of object names for which acoustic features are diagnostic (e.g., “telephone”), and that this activation increased linearly with the relevance of auditory features to the object concept. The authors argued that the early latency of auditory-related activity precludes an explanation based on postperceptual imagery, and suggests that the activations in auditory cortex are partly constituent of the object concepts themselves (however, for evidence that the pSTG and MTG may play a role in object naming, see Acheson, Hamidi, Binder, & Postle, 2011).
Recent findings suggest that the action-related features of tools, stored in motor and premotor areas of the brain, may facilitate the restructuring of perceptual representations within the medial fusiform gyrus (Mahon et al., 2007), a part of the ventral stream that has been implicated in the processing of manipulable objects (Beauchamp, Lee, Haxby, & Martin, 2002, 2003; Chao et al., 1999; Chao, Weisberg, & Martin, 2002; Noppeney, Price, Penny, & Friston, 2006). Using fMRI repetition suppression, the authors found that neurons in the medial fusiform gyrus exhibited neural specificity for tools, but not for arbitrarily manipulable objects (such as books or envelopes), or nonmanipulable large objects (Mahon et al., 2007). Stimulus-specific repetition suppression in dorsal motor areas was similarly restricted to tools, and was functionally related to the neural specificity for tools in the ventral stream. Similar effects have been found for novel objects after extensive training to use those objects for tool-like tasks (Weisberg, Van Turennout, & Martin, 2007). These results raise the question: What types of experience are sufficient and/or necessary for an object to be represented as a tool in the ventral stream? A compelling area for future research would be to use fMRI repetition suppression or multivoxel pattern analysis (MVPA) to examine whether, and if so how, different types of learning and experience engender stimulus-specific changes to perceptual representations within visual cortex. The results of Mahon et al. (2007) suggest that associating function-related motor experience with novel objects increases neural specificity for those objects within the ventral visual stream; however, it remains unclear to what extent direct motor experience is necessary for such changes in neural tuning to arise.
Is direct sensory/motor experience necessary for embodied conceptual representations to emerge? One study tested this by having participants learn associations between novel objects and words describing features of these stimuli, such as being “loud” (James & Gauthier, 2003). The results of a subsequent fMRI study showed that a portion of auditory cortex—the superior temporal gyrus—was activated for objects associated with sound descriptors, and that a region near motion-sensitive cortex (MT/V5)—the posterior superior temporal sulcus—was activated for objects associated with motion descriptors. These findings suggest that knowledge about an object’s sensory features derived through abstract semantic learning can engage similar neural mechanisms as sensory knowledge acquired through direct experience. In other words, being told that an object is loud and hearing a loud object may influence subsequent identification by engaging similar neural processing regions.
Arguments against embodied cognition
The crux of the embodied account of conceptual knowledge holds that concepts are embodied or instantiated in the same neural regions required for specific types of perception and action. This idea has been largely supported by behavioral and neuroimaging findings demonstrating that sensory and motor features of concepts are activated quickly and automatically, and that motor-relevant properties of objects can shape perceptual representations formed in ventral–visual cortex. These findings, however, do not provide unequivocal support that embodied representations are necessary for conceptual understanding. Opponents of the embodied-cognition hypothesis have argued that the behavioral influences of perceptual processing on conceptual processing that have been cited in support of embodiment (Kaschak et al., 2005; Pecher, Zeelenberg, & Barsalou, 2003, 2004; van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008; Zwaan & Taylor, 2006) may be occurring at the level of response selection, rather than playing a necessary role in conceptual understanding (Mahon & Caramazza, 2008). Similarly, the neuroimaging findings that have been cited in support of embodiment are also consistent with theories that allow for spreading activation from disembodied conceptual representations to the sensory and motor systems that guide behavior (Chatterjee, 2011; Mahon & Caramazza, 2008). Sensory-motor areas may be activated because they are necessary for conceptual processing; alternatively, activation in sensory-motor areas may reflect a spread of activation from amodal areas (Mahon & Caramazza, 2008). This interpretation of the relevant neuroimaging work is consistent with a recent study showing that activity in an amodal association area (left IFG) correlated with behavioral performance on a semantic property verification task (Smith et al., 2012).
TMS research has offered the most compelling evidence in favor of the embodied-cognition hypothesis, but even these findings are subject to criticism, since TMS effects tend to be distributed away from the stimulated site via cerebrospinal fluid (Wagner, Valero-Cabre, & Pascual-Leone, 2007), potentially influencing brain activity in functionally connected regions.
Moreover, neuropsychological findings are largely inconsistent with embodied accounts. For instance, patients with focal lesions causing apraxia, a deficit in using objects, retain conceptual knowledge of object names and how objects should be used (Johnson-Frey, 2004; Mahon & Caramazza, 2005; Negri et al., 2007). Many other lesion studies that appear to support embodied accounts lack the anatomical specificity needed to support strong versions of embodiment. For example, one study showed that patients with damage to frontal motor areas demonstrated a lexical-decision impairment for action-related words (Neininger & Pulvermüller, 2003); however, most of these patients also had extensive damage to the parietal and temporal cortices, and thus their semantic deficit could not be attributed to the frontal damage per se. Furthermore, a recent study of a large cohort of individuals with left-hemisphere lesions to sensorimotor areas found no relationship between the site of the cortical lesion and conceptual processing of motor verbs (Arévalo, Baldo, & Dronkers, 2012). Additional work using such methods could shed light on the roles of other sensory processing areas in object cognition and perception.
Embodied cognition: Summary and conclusions
The current body of literature does not provide strong evidence that embodiment is necessary for semantic understanding. Moving forward, it is probably more useful to ask to what degree concepts are embodied (Hauk & Tschentscher, 2013), under what circumstances they are embodied, and how this varies between individuals. It has been shown that the degree to which concepts are embodied varies from person to person depending on object-related experience (Beilock, Lyons, Mattarella-Micke, Nusbaum, & Small, 2008; Calvo-Merino, Glaser, Grèzes, Passingham, & Haggard, 2005; Hoenig et al., 2011). Thus, future work should be aimed at understanding the mechanisms through which individual differences in embodied cognition emerge, and how these embodied representations contribute to cognition. Additionally, conceptual processing may engage embodied representations to different extents depending on the task at hand. One possibility is that perceptual and motor areas mediate visual imagery, which may be needed to verify complex perceptual properties that are not immediately accessible to the observer, but not simple perceptual characteristics that are strongly associated with a concept (Thompson-Schill, 2003). Finally, current neuroimaging work has been aimed at understanding the dynamic interactions between distributed brain networks that give rise to cognition and perception. Functional MRI methods that assess functional and effective connectivity between brain areas will likely contribute to our understanding of how sensory–motor areas interact with amodal association areas to give rise to semantic understanding (Valdés-Sosa et al., 2011).
Semantic knowledge affects the visual processing of objects and faces
In the preceding sections, we discussed the two major frameworks in which interactions between conceptual and perceptual processing systems have been studied. However, these frameworks offer little insight into how conceptual knowledge that is unrelated to the sensory properties of a stimulus affects subsequent processing. For example, we frequently acquire emotionally laden information about the people around us—that they are silly, curious, lazy, or extraverted—and this information can bias perceptual processing (Anderson, Siegel, Bliss-Moreau, & Barrett, 2011).
In the laboratory, this issue is studied by training participants to associate semantic features with previously unfamiliar stimuli. For example, it has been shown that associating meaningful verbal labels with perceptually novel stimuli improves visual search efficiency (Lupyan & Spivey, 2008), though only when participants adopt a passive search strategy (Smilek, Dixon, & Merikle, 2006). Three additional behavioral studies have used training paradigms, during which participants learned to associate in-depth semantic knowledge with novel visual objects, to identify conceptual influences on visual processing that occur independently of visual object features and familiarity. Findings from this literature have shown that conceptual knowledge can facilitate the recognition of novel objects and attenuate the viewpoint dependency of object recognition (Collins & Curby, 2013; Curby, Hayward, & Gauthier, 2004; Gauthier, James, Curby, & Tarr, 2003). In these studies, participants learned to associate clusters of three semantic features with each of four novel objects (see Fig. 3). Later, these stimuli appeared in a perceptual-matching task in which participants indicated whether two sequentially presented stimuli were the same or different. The first object was always presented in its canonical orientation, whereas the second could be presented at one of four orientations (0°, 30°, 60°, or 120°). Gauthier et al. (2003) found that the discrimination of novel objects was facilitated across all viewpoints when these objects were associated with a cluster of distinctive (nonoverlapping) semantic features. Furthermore, Curby and colleagues found that the viewpoint dependency of object recognition was attenuated for objects that had been associated with in-depth semantic associations (Curby et al., 2004), but not for those associated with nonsemantic verbal labels (Collins & Curby, 2013).
These findings indicate that changes in visual recognition performance can be attributed to conceptual attributes, and they are consistent with findings indicating that learning to associate in-depth semantic knowledge with novel objects attenuated the recognition deficits of a patient with category-specific visual agnosia (CSVA) to a near-normal level (Arguin, Bub, & Dudek, 1996). Similar results have been obtained for face recognition in patients with prosopagnosia (Dixon, Bub, & Arguin, 1998).
What is unclear is when during perceptual decision-making tasks such influences of in-depth semantic knowledge occur. In the perceptual-matching task used for the three studies discussed above, the first object was presented for 1,500 ms, followed immediately by a second object for 180 ms. Semantic knowledge could have improved perceptual-matching performance by facilitating the consolidation of the first stimulus into a durable representation for perceptual comparison, by enabling participants to more efficiently process features of the second stimulus that were diagnostic across changes of viewpoint, or by facilitating the integration of the visual features of the second object into a durable form for perceptual comparison. These possibilities are by no means mutually exclusive, and semantic knowledge likely contributes to visual object processing through multiple mechanisms. Because of their high temporal resolution, electrophysiological findings are useful in elucidating when during stimulus processing such influences of in-depth semantic knowledge on visual processing emerge.
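The sequential-matching design described above can be sketched in code to make the trial structure concrete. The timings (1,500 ms first stimulus, 180 ms second stimulus) and the four rotations come from the studies discussed; the object identifiers and trial-generation logic below are purely illustrative placeholders, not the actual stimuli or software used in those experiments.

```python
import random

# Rotations of the second stimulus (degrees), per the studies described above.
ROTATIONS = [0, 30, 60, 120]

def make_trial(object_id, other_ids, rng):
    """Build one same/different perceptual-matching trial.

    The first stimulus appears in its canonical orientation for 1,500 ms;
    the second follows for 180 ms at one of four rotations. On "different"
    trials the second stimulus is drawn from the remaining trained objects.
    """
    same = rng.random() < 0.5
    second = object_id if same else rng.choice(other_ids)
    return {
        "first": {"object": object_id, "rotation_deg": 0, "duration_ms": 1500},
        "second": {"object": second,
                   "rotation_deg": rng.choice(ROTATIONS),
                   "duration_ms": 180},
        "correct_response": "same" if same else "different",
    }

def make_block(object_ids, n_trials, seed=0):
    """Generate a block of matching trials over a set of trained objects."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        target = rng.choice(object_ids)
        others = [o for o in object_ids if o != target]
        trials.append(make_trial(target, others, rng))
    return trials
```

In this sketch, viewpoint dependency would be measured by comparing matching accuracy across the `rotation_deg` values of the second stimulus for objects trained with versus without semantic associations.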
When does semantic knowledge influence perception? Insights from electrophysiological studies
Because of its role in the holistic processing of faces (Sagiv & Bentin, 2001), recent research has focused on the influence of facial familiarity on the N170 component. The N170 is a negative-going component that is larger for faces than for other objects over lateral occipital electrode sites, with a peak at approximately 170 ms (Sagiv & Bentin, 2001). Due to the sensitivity of the N170 component to inversion manipulations (i.e., turning a stimulus upside down), it has been related to the structural encoding of stimuli (i.e., recognition that is driven by the configuration of the parts of a stimulus), as well as to the initial categorization of face stimuli (Itier & Taylor, 2002; Sagiv & Bentin, 2001). Although some studies have revealed modulations of the N170 component by face familiarity (Caharel et al., 2002; Heisz & Shedden, 2009; Herzmann, Schweinberger, Sommer, & Jentzsch, 2004; Jemel, Pisani, Calabria, Crommelinck, & Bruyer, 2003), other studies have not (Bentin & Deouell, 2000; Eimer, 2000; Schweinberger, Pickering, Burton, & Kaufmann, 2002), suggesting that influences of familiarity on the structural encoding of face stimuli are tenuous at best. The studies that have utilized training procedures to assess the independent contribution of conceptual knowledge to the N170 response, while controlling for visual familiarity confounds, are also inconsistent in their findings. Two studies have shown that faces associated with in-depth biographical information elicit greater N170 components than faces without such information (Galli, Feurra, & Viggiano, 2006; Herzmann & Sommer, 2010). However, two studies that used similar training procedures revealed similar-amplitude N170 responses to faces with and without learned associations (Kaufmann, Schweinberger, & Burton, 2009; Paller, Gonsalves, Grabowecky, Bozic, & Yamada, 2000).
Further insight into the contribution of conceptual knowledge to the structural encoding of face stimuli can be gained through studies of the N170 repetition effect. The N170 repetition effect is thought to reflect the identification of a stimulus on the basis of its perceptual features, and several studies have shown that it is restricted to unfamiliar faces (Caharel et al., 2002; Henson et al., 2003). Across two studies (Heisz & Shedden, 2009; Herzmann & Sommer, 2010), faces that were associated with in-depth social knowledge elicited a reduced N170 repetition effect, relative to faces learned without such associations. These results suggest that conceptual knowledge modulates the perceptual processing of faces, as reflected by the N170 repetition effect, possibly by allowing semantic representations to contribute to face identification, thus reducing the perceptual demands of identification.
Two recent studies have shown that conceptual knowledge can penetrate even earlier stages of visual recognition, as revealed by modulations of the P100 component (Abdel Rahman & Sommer, 2008, 2012). The P100 component typically has a poststimulus onset of 60–90 ms, with a peak between 100 and 130 ms. This component is elicited by any visual object, is sensitive to stimulus parameters such as contrast and spatial frequency, and is often considered an indicator of early visual processing (Itier & Taylor, 2004). Abdel Rahman and Sommer (2008) used a two-part training paradigm in which participants learned semantic information about a class of complex novel objects. In the first part, all stimuli were associated with names and minimal semantic information (whether the item was real or fictitious). In the second part, participants listened to in-depth stories detailing an object’s function while viewing some stimuli, and listened to irrelevant stories while viewing the other stimuli. Thus, stimuli in both conditions were matched for naming, visual exposure, and amount of verbal information, with the only difference between conditions being whether the presented verbal information was informative. The in-depth stories facilitated recognition when these objects were blurred in an identification task, and this effect was associated with an attenuated P100 component. Using a training paradigm similar to that of the previous study, Abdel Rahman and Sommer (2012) investigated the influence of in-depth semantic knowledge on the perception of faces. Consistent with their previous findings, faces associated with in-depth semantic knowledge elicited a reduced P100 component, relative to faces associated with only minimal semantic knowledge. Taken together, these studies suggest that the earliest stages of visual analysis are penetrable to influences from higher-order conceptual knowledge.
It is interesting to note that no influence of in-depth semantic learning was apparent on the N170 component for the faces in the Abdel Rahman and Sommer (2012) study. This finding is consistent with those of Paller et al. (2000) and Kaufmann et al. (2009), and supports the suggestion that conceptual knowledge may only influence the processes underlying the N170 component when stimuli are presented twice in rapid succession (Heisz & Shedden, 2009; Herzmann & Sommer, 2010).
To summarize, although influences of conceptual knowledge on the magnitude of the N170 effect for faces are tenuous at best, the N170 repetition effect is reduced for faces with learned semantic knowledge. These results suggest that semantic knowledge may modulate facial representations such that the perceptual demands of identification are reduced. Additional work has shown that in-depth semantic knowledge can facilitate the early evaluation of stimulus features, as reflected by the P100 component. One possibility is that semantic knowledge modulates electrophysiological correlates of visual processing by attracting additional attention to faces or objects that have been associated with knowledge. However, training-induced increases in attention would likely enhance (Hillyard & Anllo-Vento, 1998; Hopfinger, Luck, & Hillyard, 2004), rather than attenuate, the P100, contrary to what was observed in these studies (Abdel Rahman & Sommer, 2008, 2012). Alternatively, conceptual knowledge may make the visual processes underlying the P100 component more efficient. Such influences of conceptual knowledge on perception may operate by altering the perceptual representation formed for novel objects and faces during training, or by recruiting top-down feedback from higher-order semantic areas to visual cortical areas, thus offsetting the perceptual demands of visual recognition (Bar et al., 2006). The latter possibility is consistent with research showing that information propagates through the visual stream to parietal and prefrontal cortices extremely quickly (within 30 ms), allowing ample time for higher-level brain areas to feed back and modulate activity in visual cortex. The P100 thus likely reflects coordinated activity among multiple cortical areas extending beyond V1 (Foxe & Simpson, 2002).
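For readers less familiar with ERP methods, component effects such as the P100 attenuation discussed above are typically quantified as the mean voltage within a latency window. The sketch below is a generic illustration with simulated data, not the analysis pipeline of the cited studies; the window bounds are taken from the component descriptions in the text.

```python
import math

def mean_amplitude(erp, times_ms, window_ms):
    """Mean voltage of an ERP waveform within a latency window.

    erp       : sequence of voltages (one electrode, one condition average)
    times_ms  : matching sequence of sample times in milliseconds
    window_ms : (start, end) window, e.g. (100, 130) for the P100 peak
                range or (150, 190) around the N170
    """
    lo, hi = window_ms
    vals = [v for v, t in zip(erp, times_ms) if lo <= t <= hi]
    return sum(vals) / len(vals)

# Simulated epoch: -100 to 398 ms sampled every 2 ms, with a toy
# Gaussian "P100-like" deflection peaking at 115 ms.
times = list(range(-100, 400, 2))
erp = [math.exp(-((t - 115) ** 2) / (2 * 15 ** 2)) for t in times]
p100 = mean_amplitude(erp, times, (100, 130))
```

A condition effect like the reported P100 attenuation would then amount to comparing this windowed mean between faces learned with and without in-depth semantic knowledge.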
Where in the brain is semantic knowledge represented?
Mounting evidence indicates that a face patch in the ventral anterior temporal lobe (ATL) has the computational property of integrating complex perceptual representations of faces with socially important semantic knowledge (Olson, McCoy, Klobusicky, & Ross, 2013). Neurons in this face patch are sensitive to small perceptual differences that distinguish the identity of one novel face from another (Anzellotti, Fairhall, & Caramazza, 2013; Kriegeskorte, Formisano, Sorger, & Goebel, 2007; Nestor, Plaut, & Behrmann, 2011). Furthermore, activity in this region is up-regulated when a face is accompanied by certain types of conceptual knowledge, such as knowledge that makes the face conceptually unique (e.g., “This person invented television”; Ross & Olson, 2012), socially unique (a friend), or famous. This sensitivity to fame and friendship is depicted in Fig. 4, which shows the results of a recent meta-analysis and empirical study of these attributes (von der Heide, Skipper, & Olson, 2013). These findings are consistent with a recent single-unit study in macaques demonstrating that neurons in the ventral ATLs represent paired associations between facial identity and abstract semantic knowledge (Eifuku, Nakata, Sugimori, Ono, & Tamura, 2010).
The ATL is also sensitive to nonface stimuli, albeit ones that are associated with social–emotional conceptual information. Skipper, Ross, and Olson (2011) trained participants to associate social or nonsocial concepts (e.g., “friendly” or “bumpy”) with novel objects, and later scanned the participants while they were presented with the objects alone (see Fig. 4). The results showed that stimuli that had previously been associated with social concepts, as compared to nonsocial concepts, activated brain regions commonly activated in social tasks, such as the amygdala, the temporal pole, and medial prefrontal cortex (see also Todorov, Gobbini, Evans, & Haxby, 2007). However, an additional study has shown that activity patterns in the ventral ATLs carry information about the nonsocial conceptual properties of everyday objects, such as where an object is typically found and how the object is typically used (Peelen & Caramazza, 2012). Together, these findings suggest that the ventral ATLs may be involved in representing social and nonsocial conceptual knowledge about faces and objects. The relative sensitivity of the ventral ATLs to other conceptual object properties has remained unexplored and warrants future research.
Influences of semantic knowledge on perception: Summary and conclusions
To summarize, gaining semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. The neural instantiation of this process depends on several factors, including the particular stimuli that are being trained and the particular associations that are being formed. As a result, this general process is associated with no single region or network. Instead, investigators should carefully consider the types of associations that are being created and draw hypotheses about neural processing based on the relevant literature.
It is common knowledge, but worth reiterating, that fMRI activations during task processing do not imply a causal role in the visual or conceptual processing of the object at hand. There is a great deal of variance in findings across the literature, and only some activations appear consistently across studies that train subjects to associate semantic knowledge with objects or faces. Consistent activations have been observed in the left IFG (James & Gauthier, 2004; Ross & Olson, 2012), which has been implicated in semantic retrieval and language production; the perirhinal cortex (Barense, Henson, & Graham, 2011), which may aid in the recognition of meaningful objects characterized by multiple overlapping features; and the ventral ATL, which may integrate facial identity with person-specific conceptual knowledge (Olson et al., 2013; Von Der Heide et al., 2013). It is plausible that activity in these regions feeds back and biases processing in visual areas in the occipital lobe and the posterior temporal lobe. Plausible conduits of rapid feedback include the long-range white matter association tracts, the inferior fronto-occipital fasciculus and the inferior longitudinal fasciculus: the former runs between the frontal lobe and posterior occipital/anterior fusiform cortex, the latter from the amygdala and ventral ATL to ventral extrastriate cortex.
Perceiver goals and motivational salience modulate conceptual influences on perception
One way that conceptual knowledge can influence visual processing is by making perceptual representations more motivationally relevant. Here we argue that the current state and goals of the perceiver are critical to determining what stimuli are considered motivationally relevant, and thus selected for visual prioritization (see Fig. 1 for an illustration). Motivational relevance can be conferred by visual properties that are intrinsic to a stimulus, or it may be derived through conceptual learning about the properties of a stimulus. For example, whereas an angry face coming toward us is always motivationally relevant, regardless of the individual, the retrieval of conceptual knowledge may be required for us to prioritize the visual processing of someone we have recently learned is the CEO of our company. A comprehensive understanding of how conceptual knowledge influences visual processing will require a careful consideration of the current goals of the perceiver and the motivational relevance of the stimulus at hand, since both of these factors have been shown to influence the allocation of perceptual resources.
The studies demonstrating that novel faces and objects associated with distinctive, emotional, or high-status associations are more easily recognized and dominate perceptual-processing resources have all been similar, in that these associations likely made the perceptual representations motivationally relevant (Anderson et al., 2011; Collins, Blacker, & Curby, 2013; Ratcliff, Hugenberg, Shriver, & Bernstein, 2011). Similarly, findings showing that novel objects and faces that have been associated with characteristics or names in training procedures elicit increased BOLD responses in the fusiform face area (FFA, a bilateral region in the posterior fusiform gyrus; Gauthier & Tarr, 2002; Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999; Van Bavel, Packer, & Cunningham, 2008, 2011) may be partially attributable to the increased motivational relevance of these stimuli following training. For instance, Van Bavel et al. (2011) demonstrated that activity in the FFA was increased for faces arbitrarily assigned to an in-group, relative to out-group or unaffiliated faces, and that activity in the FFA for in-group faces predicted subsequent memory for those faces. These findings are consistent with the idea that motivationally relevant stimuli (in this case, due to group membership) receive preferential processing resources, and that these effects can occur through top-down mechanisms, in the absence of perceptual cues signifying emotional salience.
One compelling demonstration of how perceptual and conceptual features can interact with visual processing goals to influence the allocation of visual processing resources is the other-race effect (ORE). It has been argued that the ORE is a type of categorical perception whereby other-race faces are perceived as being more similar because of their category membership. Some research has suggested that the ORE is partially due to the out-group status of other-race faces, leading perceivers to have less motivation to individuate other-race faces (Sporer, 2001). Consistent with this possibility, it has been shown that the racial category of other-race faces is very quickly triggered during face perception (Cloutier, Mason, & Macrae, 2005; Ito & Urland, 2003; Mouchetant-Rostaing & Girard, 2003). Additionally, it has been shown that the ORE is attenuated for faces that are made more motivationally relevant through a shared university affiliation (Hehman, Mania, & Gaertner, 2010) and that the encoding of same-race faces is reduced to the level of other-race faces if they are made less motivationally relevant by being presented on an impoverished background (Shriver, Young, Hugenberg, Bernstein, & Lanter, 2008).
When during perception do influences of racial category on face recognition arise? Behavioral findings have suggested that other-race faces are encoded less configurally than same-race faces (Fallshore & Schooler, 1995; Hancock & Rhodes, 2008; Michel, Corneille, & Rossion, 2007; Michel, Rossion, Han, Chung, & Caldara, 2006; Sangrigoli & De Schonen, 2004). This suggestion has been corroborated by electrophysiological work showing that the N170 component is reduced for other- relative to same-race faces (Balas & Nelson, 2010; Brebner, Krigolson, Handy, Quadflieg, & Turk, 2011; Caharel et al., 2011; He, Johnson, Dovidio, & McCarthy, 2009; Herrmann et al., 2007; Stahl, Wiese, & Schweinberger, 2008, 2010; Walker, Silvert, Hewstone, & Nobre, 2008). Importantly, processing goals shape the influence of race on the N170 component, with one study showing that, relative to same-race faces, the N170 component is attenuated for other-race faces when participants attend to race, and enhanced when participants attend to identity (Senholzi & Ito, 2013).
Influences of race on configural processing and the N170 component can occur in the absence of perceptual cues signifying racial category. Using the composite task, it has been shown that racially ambiguous faces (those with neither stereotypically Black nor White features) are perceived more holistically when categorized as belonging to the same race relative to another race (Michel et al., 2007). Additionally, Caucasian participants showed an earlier N170 component when viewing Caucasian faces that they were told shared their nationality or university affiliation (Zheng & Segalowitz, 2013). Social–categorical knowledge has been shown to influence even earlier perceptual processes, such as luminance perception. In a particularly revealing study (Levin & Banaji, 2006), it was demonstrated that people consistently misperceive the lightness of faces, such that faces with stereotypically Black features are perceived as being darker than faces with stereotypically White features, even when luminance is tightly controlled. Furthermore, racially ambiguous faces are perceived as being darker when they are paired with the label “Black” relative to “White.” Together, these findings indicate that social knowledge about the racial category of a face can bias the visual encoding of that face at early stages of perception.
Discussion
The goal of this review has been to synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. In doing so, we sought to address three questions that we consider fundamental to the understanding of higher-level vision. We will consider each of these in turn.
How are objects represented by the visual system?
We have reviewed studies demonstrating that category knowledge, which is inherently conceptual, can penetrate early stages of visual analysis and engender changes to object representations such that category-relevant features are sensitized within the ventral visual stream. Fundamental information-processing constraints may restrict the stimulus dimensions that can be used to infer category membership and that can bias perceptual representations. Future work should address the types and amounts of training that are sufficient for category knowledge to alter perceptual representations, as well as the constraints that variations in stimulus complexity may place on category learning.
Where in the brain is object-related conceptual knowledge represented, and how does the activation of conceptual information about an object unfold?
In answering this question, we first considered the embodied account of conceptual knowledge, which holds that concepts are embodied or instantiated in the same neural regions required for specific types of perception and action. This idea has been largely supported by behavioral and neuroimaging findings demonstrating that sensory and motor features of concepts are activated quickly and automatically. Although these findings are interesting in their own right, in that they support a tight coupling between conceptual and perceptual processing systems, it is not clear what role these embodied representations play in cognition. The degree to which concepts are embodied likely varies from person to person, depending on the sensory–motor experience an individual has with a given object, and according to current semantic-processing demands. It will behoove future researchers to design research aimed at understanding the mechanisms through which individual differences in embodied cognition emerge, and how these embodied representations contribute to cognition.
Moreover, the embodied account of conceptual knowledge provides little insight into the neural representation of the in-depth and abstract conceptual knowledge that we frequently have for faces and objects. The few studies that have utilized training paradigms in which novel objects or faces are associated with in-depth knowledge have revealed a great deal of variance in the findings, with only some activations appearing consistently. Two areas that are consistently activated by faces accompanied by in-depth knowledge are portions of the ATL and the left IFG, which may reflect the automatic retrieval of concepts, although superior-polar ATL activations appear to be most closely associated with the processing of socially important concepts (Skipper et al., 2011). It is worth reiterating that the neural regions subserving the recognition of meaningful stimuli will depend on (a) the particular stimuli (faces? objects? tools?) and (b) the particular associations linked to these stimuli. Thus, investigators should carefully consider the types of associations that are being created and draw hypotheses about neural processing based on the relevant literature.
What are the consequences of accessing conceptual knowledge for perceptual decision-making about the visual world?
Modular, impenetrable views of perception have difficulty accounting for many of the findings reviewed in this article. For example, studies showing that category learning can modify perceptual representations (De Baene et al., 2008; Folstein et al., 2013; Goldstone, 1994; Goldstone et al., 2001; Notman et al., 2005), that perceptual sensitivity (d') is influenced by language (Meteyard et al., 2007), that social–categorical knowledge can influence luminance perception (Levin & Banaji, 2006), and that semantic knowledge can bias the electrophysiological markers of preattentive visual processing (Abdel Rahman & Sommer, 2008, 2012; Holmes et al., 2009) are all incompatible with modular views in which perception is completely encapsulated from cognition. It remains unclear whether conceptual knowledge influences perceptual processing by modifying perceptual representations within visual cortex, or through top-down feedback from higher-order to sensory areas of the brain (Bar et al., 2006). One training study has demonstrated experience-dependent plasticity for tools within ventral–visual cortex (Weisberg et al., 2007); however, it is unclear whether similar effects would generalize to nontool objects, or to objects with no motor experience.
Conclusions
Each of the bodies of literature reviewed above supports a tight coupling between conceptual and perceptual processing that is incompatible with strong modular views of perception (Pylyshyn, 1999). Conscious perception results from a reverberation between feed-forward and top-down flows of information in the brain (Gilbert & Sigman, 2007). Early visual cortex (V1–V4) has been shown to respond to associative learning (Damaraju, Huang, Barrett, & Pessoa, 2009), and brain areas implicated in learning and memory have been shown to have perceptual capacities (see Graham, Barense, & Lee, 2010, for a review). A more fruitful endeavor in guiding our understanding of visual cognition may be to investigate the representations that are housed in different cortical areas, rather than the alleged specialized tasks performed by those cortical areas (Cowell, Bussey, & Saksida, 2010). If one does away with the assumption that perception and cognition are encapsulated in functionally discrete processing regions, then it is not clear that cognition influencing perception is any more controversial than cognition influencing cognition. The dynamic interactions between processes considered conceptual and those considered perceptual have remained a relatively underexplored area of psychology. It is our hope that future research will be aimed at further understanding the dynamic interplay between conceptual knowledge and visual object processing.
References
Abdel Rahman, R., & Sommer, W. (2008). Seeing what we know and understand: How knowledge shapes perception. Psychonomic Bulletin & Review, 15, 1055–1063. doi:10.3758/PBR.15.6.1055
Abdel Rahman, R., & Sommer, W. (2012). Knowledge scale effects in face recognition: An electrophysiological investigation. Cognitive, Affective, & Behavioral Neuroscience, 12, 161–174. doi:10.3758/s13415-011-0063-9
Acheson, D. J., Hamidi, M., Binder, J. R., & Postle, B. R. (2011). A common neural substrate for language production and verbal working memory. Journal of Cognitive Neuroscience, 23, 1358–1367. doi:10.1162/jocn.2010.21519
Anderson, E., Siegel, E. H., Bliss-Moreau, E., & Barrett, L. F. (2011). The visual impact of gossip. Science, 332, 1446–1448. doi:10.1126/science.1201574
Angeli, A., Davidoff, J., & Valentine, T. (2008). Face familiarity, distinctiveness, and categorical perception. Quarterly Journal of Experimental Psychology, 61, 690–707.
Anzellotti, S., Fairhall, S. L., & Caramazza, A. (2013). Decoding representations of face identity that are tolerant to rotation. Cerebral Cortex. doi:10.1093/cercor/bht046. Advance online publication.
Arévalo, A. L., Baldo, J. V., & Dronkers, N. F. (2012). What do brain lesions tell us about theories of embodied semantics and the human mirror neuron system? Cortex, 48, 242–254. doi:10.1016/j.cortex.2010.06.001
Arguin, M., Bub, D., & Dudek, G. (1996). Shape integration for visual object recognition and its implication in category-specific visual agnosia. Visual Cognition, 3, 221–275.
Balas, B., & Nelson, C. A. (2010). The role of face shape and pigmentation in other-race face perception: An electrophysiological study. Neuropsychologia, 48, 498–506. doi:10.1016/j.neuropsychologia.2009.10.007
Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmidt, A. M., Dale, A. M., . . . Halgren, E. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences, 103, 449–454. doi:10.1073/pnas.0507062103
Barense, M. D., Henson, R. N. A., & Graham, K. S. (2011). Perception and conception: Temporal lobe activity during complex discriminations of familiar and novel faces and objects. Journal of Cognitive Neuroscience, 23, 3052–3067. doi:10.1162/jocn_a_00010
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609, disc. 609–660. doi:10.1017/S0140525X99002149
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645. doi:10.1146/annurev.psych.59.103006.093639
Barsalou, L. W., Simmons, W. K., Barbey, A., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7, 84–91. doi:10.1016/S1364-6613(02)00029-3
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.
Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2003). fMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15, 991–1001.
Beilock, S. L., Lyons, I. M., Mattarella-Micke, A., Nusbaum, H. C., & Small, S. L. (2008). Sports experience changes the neural processing of action language. Proceedings of the National Academy of Sciences of the United States of America, 105, 13269–13273. doi:10.1073/pnas.0803424105
Bentin, S., & Deouell, L. Y. (2000). Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology, 17, 35–54. doi:10.1080/026432900380472
Blechert, J., Sheppes, G., Di Tella, C., Williams, H., & Gross, J. J. (2012). See what you think: Reappraisal modulates behavioral and neural responses to social stimuli. Psychological Science, 23, 346–353. doi:10.1177/0956797612438559
Brebner, J. L., Krigolson, O., Handy, T. C., Quadflieg, S., & Turk, D. J. (2011). The importance of skin color and facial structure in perceiving and remembering others: An electrophysiological study. Brain Research, 1388, 123–133. doi:10.1016/j.brainres.2011.02.090
Bruner, J., & Goodman, C. C. (1947). Value and need as organizing factors in perception. Journal of Abnormal Social Psychology, 42, 33–44.
Burton, A. M., Bruce, V., & Hancock, P. J. B. (1999). From pixels to people: A model of familiar face recognition. Cognitive Science, 23, 1–31. doi:10.1207/s15516709cog2301_1
Caharel, S., Montalan, B., Fromager, E., Bernard, C., Lalonde, R., & Rebai, M. (2011). Other-race and inversion effects during the structural encoding stage of face processing in a race categorization task: An event-related brain potential study. International Journal of Psychophysiology, 79, 266–271. doi:10.1016/j.ijpsycho.2010.10.018
Caharel, S., Poiroux, S., Bernard, C., Thibaut, F., Lalonde, R., & Rebai, M. (2002). ERPs associated with familiarity and degree of familiarity during face recognition. International Journal of Neuroscience, 112, 1499–1512.
Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., & Haggard, P. (2005). Action observation and acquired motor skills: An fMRI study with expert dancers. Cerebral Cortex, 15, 1243–1249. doi:10.1093/cercor/bhi007
Casasola, M. (2005). Can language do the driving? The effect of linguistic input on infants’ categorization of support spatial relations. Developmental Psychology, 41, 183–192. doi:10.1037/0012-1649.41.1.183
Chao, L. L., Haxby, J. V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience, 2, 913–919.
Chao, L. L., Weisberg, J., & Martin, A. (2002). Experience dependent modulation of category-related cortical activity. Cerebral Cortex, 12, 545–551.
Chatterjee, A. (2011). Disembodying cognition. Language and Cognition, 2, 79–116. doi:10.1515/LANGCOG.2010.004
Clifford, A., Holmes, A., Davies, I. R. L., & Franklin, A. (2010). Color categories affect pre-attentive color perception. Biological Psychology, 85, 275–282. doi:10.1016/j.biopsycho.2010.07.014
Cloutier, J., Mason, M. F., & Macrae, C. N. (2005). The perceptual determinants of person construal: Reopening the social-cognitive toolbox. Journal of Personality and Social Psychology, 88, 885–894.
Collins, J. A., Blacker, K. J., & Curby, K. M. (2013). Emotional knowledge (eventually) impacts visual processing. Presented at the Annual Conference of the Vision Sciences Society, Naples, FL.
Collins, J. A., & Curby, K. M. (2013). Conceptual knowledge attenuates viewpoint dependency in visual object recognition. Visual Cognition. doi:10.1080/13506285.2013.836138. Advance online publication.
Cowell, R. A., Bussey, T. J., & Saksida, L. M. (2010). Functional dissociations within the ventral object processing pathway: Cognitive modules or a hierarchical continuum? Journal of Cognitive Neuroscience, 22, 2460–2479. doi:10.1162/jocn.2009.21373
Curby, K. M., Hayward, W. G., & Gauthier, I. (2004). Laterality effects in the recognition of depth-rotated novel objects. Cognitive, Affective, & Behavioral Neuroscience, 4, 100–111. doi:10.3758/CABN.4.1.100
Damaraju, E., Huang, Y.-M., Barrett, L. F., & Pessoa, L. (2009). Affective learning enhances activity and functional connectivity in early visual cortex. Neuropsychologia, 47, 2480–2487. doi:10.1016/j.neuropsychologia.2009.04.023
De Baene, W., Ons, B., Wagemans, J., & Vogels, R. (2008). Effects of category learning on the stimulus selectivity of macaque inferior temporal neurons. Learning & Memory, 15, 717–727. doi:10.1101/lm.1040508
Dixon, M. J., Bub, D. N., & Arguin, M. (1998). Semantic and visual determinants of face recognition in a prosopagnosic patient. Journal of Cognitive Neuroscience, 10, 362–376.
Eifuku, S., Nakata, R., Sugimori, M., Ono, T., & Tamura, R. (2010). Neural correlates of associative face memory in the anterior inferior temporal cortex of monkeys. Journal of Neuroscience, 30, 15085–15096. doi:10.1523/JNEUROSCI.0471-10.2010
Eimer, M. (2000). Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology, 111, 694–705.
Fallshore, M., & Schooler, J. W. (1995). Verbal vulnerability of perceptual expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1608–1623. doi:10.1037/0278-7393.21.6.1608
Folstein, J., Gauthier, I., & Palmeri, T. J. (2012). How category learning affects object representations: Not all morphspaces stretch alike. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 807–820. doi:10.1037/a0025836
Folstein, J. R., Palmeri, T. J., & Gauthier, I. (2013). Category learning increases discriminability of relevant object dimensions in visual cortex. Cerebral Cortex, 23, 814–823. doi:10.1093/cercor/bhs067
Foxe, J. J., & Simpson, G. V. (2002). Flow of activation from V1 to frontal cortex in humans: A framework for defining “early” visual processing. Experimental Brain Research, 142, 139–150. doi:10.1007/s00221-001-0906-7
Franklin, A., Drivonikou, G. V., Bevis, L., Davies, I. R. L., Kay, P., & Regier, T. (2008a). Categorical perception of color is lateralized to the right hemisphere in infants, but to the left hemisphere in adults. Proceedings of the National Academy of Sciences, 105, 3221–3225. doi:10.1073/pnas.0712286105
Franklin, A., Drivonikou, G. V., Clifford, A., Kay, P., Regier, T., & Davies, I. R. L. (2008b). Lateralization of categorical perception of color changes with color term acquisition. Proceedings of the National Academy of Sciences, 105, 18221–18225. doi:10.1073/pnas.0809952105
Galli, G., Feurra, M., & Viggiano, M. P. (2006). “Did you see him in the newspaper?” Electrophysiological correlates of context and valence in face processing. Brain Research, 1119, 190–202.
Gauthier, I., James, T. W., Curby, K. M., & Tarr, M. J. (2003). The influence of conceptual knowledge on visual discrimination. Cognitive Neuropsychology, 20, 507–523. doi:10.1080/02643290244000275
Gauthier, I., & Tarr, M. J. (2002). Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception and Performance, 28, 431–446. doi:10.1037/0096-1523.28.2.431
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform “face area” increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568–573.
Gentner, D., & Goldin-Meadow, S. (2003). Language in mind: Advances in the study of language and thought. Cambridge, MA: MIT Press.
Gilbert, A. L., Regier, T., Kay, P., & Ivry, R. B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences, 103, 489–494.
Gilbert, C. D., & Sigman, M. (2007). Brain states: Top-down influences in sensory processing. Neuron, 54, 677–696. doi:10.1016/j.neuron.2007.05.019
Gilchrist, J. C., & Nesberg, L. S. (1952). Need and perceptual change in need-related objects. Journal of Experimental Psychology, 44, 369–376.
Gillebert, C. R., Op de Beeck, H. P., Panis, S., & Wagemans, J. (2009). Subordinate categorization enhances the neural selectivity in human object-selective cortex for fine shape differences. Journal of Cognitive Neuroscience, 21, 1054–1064. doi:10.1162/jocn.2009.21089
Goldberg, R. F., Perfetti, C. A., & Schneider, W. (2006a). Distinct and common cortical activations for multimodal semantic categories. Cognitive, Affective, & Behavioral Neuroscience, 6, 214–222. doi:10.3758/CABN.6.3.214
Goldberg, R. F., Perfetti, C. A., & Schneider, W. (2006b). Perceptual knowledge retrieval activates sensory brain regions. Journal of Neuroscience, 26, 4917–4921. doi:10.1523/JNEUROSCI.5389-05.2006
Goldstone, R. L. (1994). Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123, 178–200.
Goldstone, R. L., Landy, D., & Brunel, L. C. (2011). Improving perception to make distant connections closer. Frontiers in Psychology, 2, 385. doi:10.3389/fpsyg.2011.00385
Goldstone, R. L., Lippa, Y., & Shiffrin, R. M. (2001). Altering object representations through category learning. Cognition, 78, 27–43.
Goldstone, R. L., Steyvers, M., & Rogosky, B. J. (2003). Conceptual interrelatedness and caricatures. Memory & Cognition, 31, 169–180. doi:10.3758/BF03194377
Graham, K. S., Barense, M. D., & Lee, A. C. H. (2010). Going beyond LTM in the MTL: A synthesis of neuropsychological and neuroimaging findings on the role of the medial temporal lobe in memory and perception. Neuropsychologia, 48, 831–853. doi:10.1016/j.neuropsychologia.2010.01.001
Grill-Spector, K. (2003). The neural basis of object perception. Current Opinion in Neurobiology, 13, 159–166. doi:10.1016/S0959-4388(03)00040-0
Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Science, 10, 14–23. doi:10.1016/j.tics.2005.11.006
Grill-Spector, K., & Malach, R. (2001). fMR-adaptation: A tool for studying the functional properties of human cortical neurons. Acta Psychologica, 107, 293–321.
Gumperz, J. J., & Levinson, S. C. (1996). Rethinking linguistic relativity. Cambridge, UK: Cambridge University Press.
Hancock, K. J., & Rhodes, G. (2008). Contact, configural coding, and the other-race effect in face recognition. British Journal of Psychology, 99, 45–56.
Harnad, S. (1987). Category induction and representation. In S. Harnad (Ed.), Categorical perception: The groundwork of cognition (pp. 535–565). New York, NY: Cambridge University Press.
Hauk, O., & Tschentscher, N. (2013). The body of evidence: What can neuroscience tell us about embodied semantics? Frontiers in Psychology, 4, 50. doi:10.3389/fpsyg.2013.00050
He, Y., Johnson, M. K., Dovidio, J. F., & McCarthy, G. (2009). The relation between race-related implicit associations and scalp-recorded neural activity evoked by faces from different races. Social Neuroscience, 4, 426–442. doi:10.1080/17470910902949184
Hehman, E., Mania, E. W., & Gaertner, S. L. (2010). Where the division lies: Common ingroup identity moderates the cross-race-facial recognition effect. Journal of Experimental Social Psychology, 46, 445–448. doi:10.1016/j.jesp.2009.11.008
Heisz, J. J., & Shedden, J. M. (2009). Semantic learning modifies perceptual face processing. Journal of Cognitive Neuroscience, 21, 1127–1134. doi:10.1162/jocn.2009.21104
Henson, R. N., Goshen-Gottstein, Y., Ganel, T., Otten, L. J., Quayle, A., & Rugg, M. D. (2003). Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex, 13, 793–805. doi:10.1093/cercor/13.7.793
Herrmann, M. J., Schreppel, T., Jäger, D., Koehler, S., Ehlis, A.-C., & Fallgatter, A. J. (2007). The other-race effect for face perception: An event-related potential study. Journal of Neural Transmission, 114, 951–957. doi:10.1007/s00702-007-0624-9
Herzmann, G., Schweinberger, S. R., Sommer, W., & Jentzsch, I. (2004). What’s special about personally familiar faces? A multimodal approach. Psychophysiology, 41, 688–701. doi:10.1111/j.1469-8986.2004.00196.x
Herzmann, G., & Sommer, W. (2010). Effects of previous experience and associated knowledge on retrieval processes of faces: An ERP investigation of newly learned faces. Brain Research, 1356, 54–72. doi:10.1016/j.brainres.2010.07.054
Hillyard, S. A., & Anllo-Vento, L. (1998). Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences, 95, 781–787.
Hoenig, K., Müller, C., Herrnberger, B., Sim, E.-J., Spitzer, M., Ehret, G., & Kiefer, M. (2011). Neuroplasticity of semantic representations for musical instruments in professional musicians. NeuroImage, 56, 1714–1725. doi:10.1016/j.neuroimage.2011.02.065
Holmes, A., Franklin, A., Clifford, A., & Davies, I. (2009). Neurophysiological evidence for categorical perception of color. Brain and Cognition, 69, 426–434. doi:10.1016/j.bandc.2008.09.003
Holmes, K. J., & Wolff, P. (2012). Does categorical perception in the left hemisphere depend on language? Journal of Experimental Psychology: General, 141, 439–443. doi:10.1037/a0027289
Hopfinger, J. B., Luck, S. J., & Hillyard, S. A. (2004). Selective attention: Electrophysiological and neuromagnetic studies. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (3rd ed., pp. 561–574). Cambridge, MA: MIT Press.
Hsu, N. S., Frankland, S. M., & Thompson-Schill, S. L. (2012). Chromaticity of color perception and object color knowledge. Neuropsychologia, 50, 327–333. doi:10.1016/j.neuropsychologia.2011.12.003
Itier, R. J., & Taylor, M. J. (2002). Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. NeuroImage, 15, 353–372.
Itier, R. J., & Taylor, M. J. (2004). Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs. NeuroImage, 21, 1518–1532.
Ito, T. A., & Urland, G. R. (2003). Race and gender on the brain: Electrocortical measures of attention to the race and gender of multiply categorizable individuals. Journal of Personality and Social Psychology, 85, 616–626.
James, T. W., & Gauthier, I. (2003). Auditory and action semantic features activate sensory-specific perceptual brain regions. Current Biology, 13, 1792–1796.
James, T. W., & Gauthier, I. (2004). Brain areas engaged during visual judgments by involuntary access to novel semantic information. Vision Research, 44, 429–439.
Jemel, B., Pisani, M., Calabria, M., Crommelinck, M., & Bruyer, R. (2003). Is the N170 for faces cognitively penetrable? Evidence from repetition priming of Mooney faces of familiar and unfamiliar persons. Cognitive Brain Research, 17, 431–446.
Jiang, X., Bradley, E., Rini, R. A., Zeffiro, T., VanMeter, J., & Riesenhuber, M. (2007). Categorization training results in shape- and category-selective human neural plasticity. Neuron, 53, 891–903. doi:10.1016/j.neuron.2007.02.015
Johnson-Frey, S. H. (2004). The neural bases of complex tool use in humans. Trends in Cognitive Sciences, 8, 71–78.
Kaschak, M. P., Madden, C. J., & Therriault, D. J. (2005). Perception of motion affects language processing. Cognition, 94, 79–89.
Kaufmann, J. M., Schweinberger, S. R., & Burton, A. M. (2009). N250 ERP correlates of the acquisition of face representations across different images. Journal of Cognitive Neuroscience, 21, 625–641. doi:10.1162/jocn.2009.21080
Kiefer, M., Sim, E. J., Herrnberger, B., Grothe, J., & Hoenig, K. (2008). The sound of concepts: Four markers for a link between auditory and conceptual brain systems. Journal of Neuroscience, 28, 12224–12230. doi:10.1523/JNEUROSCI.3579-08.2008
Kikutani, M., Roberson, D., & Hanley, J. R. (2008). What’s in the name? Categorical perception for unfamiliar faces can occur through labeling. Psychonomic Bulletin & Review, 15, 787–794. doi:10.3758/PBR.15.4.787
Kriegeskorte, N., Formisano, E., Sorger, B., & Goebel, R. (2007). Individual faces elicit distinct response patterns in human anterior temporal cortex. Proceedings of the National Academy of Sciences, 104, 20600–20605. doi:10.1073/pnas.0705654104
Kwok, V., Niu, Z. D., Kay, P., Zhou, K., Mo, L., Jin, Z., . . . Tan, L. H. (2011). Learning new color names produces rapid increase in gray matter in the intact adult human cortex. Proceedings of the National Academy of Sciences, 108, 6686–6688. doi:10.1073/pnas.1103217108
Levin, D. T., & Banaji, M. R. (2006). Distortions in the perceived lightness of faces: The role of race categories. Journal of Experimental Psychology: General, 135, 501–512. doi:10.1037/0096-3445.135.4.501
Levin, D. T., & Beale, J. M. (2000). Categorical perception occurs in newly learned faces, other-race faces, and inverted faces. Perception & Psychophysics, 62, 386–401.
Levinson, S. C. (1997). From outer to inner space: Linguistic categories and non-linguistic thinking. In J. Nuyts & E. Pederson (Eds.), Language and conceptualization (pp. 13–45). Cambridge, UK: Cambridge University Press.
Livingston, K. R., Andrews, J. K., & Harnad, S. (1998). Categorical perception effects induced by category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 732–753. doi:10.1037/0278-7393.24.3.732
Lo Gerfo, E., Oliveri, M., Torriero, S., Salerno, S., Koch, G., & Caltagirone, C. (2008). The influence of rTMS over prefrontal and motor areas in a morphological task: Grammatical vs. semantic effects. Neuropsychologia, 46, 764–770. doi:10.1016/j.neuropsychologia.2007.10.012
Lupyan, G. (2012). Linguistically modulated perception and cognition: The label-feedback hypothesis. Frontiers in Psychology, 3, 54. doi:10.3389/fpsyg.2012.00054
Lupyan, G., Rakison, D. H., & McClelland, J. L. (2007). Language is not just for talking: Labels facilitate learning of novel categories. Psychological Science, 18, 1077–1083. doi:10.1111/j.1467-9280.2007.02028.x
Lupyan, G., & Spivey, M. J. (2008). Perceptual processing is facilitated by ascribing meaning to novel stimuli. Current Biology, 18, 410–412. doi:10.1016/j.cub.2008.02.073
Lupyan, G., & Spivey, M. J. (2010). Redundant spoken labels facilitate perception of multiple items. Attention, Perception, & Psychophysics, 72, 2236–2253.
Lupyan, G., & Thompson-Schill, S. L. (2012). The evocative power of words: Activation of concepts by verbal and nonverbal means. Journal of Experimental Psychology: General, 141, 170–186. doi:10.1037/a0024904
Lupyan, G., Thompson-Schill, S. L., & Swingley, D. (2010). Conceptual penetration of visual processing. Psychological Science, 21, 682–691. doi:10.1177/0956797610366099
Macpherson, F. (2012). Cognitive penetration of colour experience: Rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research, 84, 24–62. doi:10.1111/j.1933-1592.2010.00481.x
Mahon, B. Z., & Caramazza, A. (2005). The orchestration of the sensory-motor systems: Clues from neuropsychology. Cognitive Neuropsychology, 22, 480–494. doi:10.1080/02643290442000446
Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology–Paris, 102, 59–70. doi:10.1016/j.jphysparis.2008.03.004
Mahon, B. Z., Milleville, S. C., Negri, G. A. L., Rumiati, R. I., Caramazza, A., & Martin, A. (2007). Action-related properties shape object representations in the ventral stream. Neuron, 55, 507–520. doi:10.1016/j.neuron.2007.07.011
Marsolek, C. J. (1999). Dissociable neural subsystems underlie abstract and specific object recognition. Psychological Science, 10, 111–118.
Marsolek, C. J., & Burgund, E. D. (2008). Dissociable neural subsystems underlie visual working memory for abstract categories and specific exemplars. Cognitive, Affective, & Behavioral Neuroscience, 8, 17–24. doi:10.3758/CABN.8.1.17
Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45. doi:10.1146/annurev.psych.57.102904.190143
McMurray, B., Aslin, R. N., Tanenhaus, M. K., Spivey, M. J., & Subik, D. (2008). Gradient sensitivity to within-category variation in words and syllables. Journal of Experimental Psychology: Human Perception and Performance, 34, 1609–1631. doi:10.1037/a0011747
Meteyard, L., Bahrami, B., & Vigliocco, G. (2007). Motion detection and motion verbs: Language affects low-level visual perception. Psychological Science, 18, 1007–1013. doi:10.1111/j.1467-9280.2007.02016.x
Meteyard, L., Zokaei, N., Bahrami, B., & Vigliocco, G. (2008). Visual motion interferes with lexical decision on motion words. Current Biology, 18, 732–733. doi:10.1016/j.cub.2008.07.016
Michel, C., Corneille, O., & Rossion, B. (2007). Race categorization modulates holistic face encoding. Cognitive Science, 31, 911–924. doi:10.1080/03640210701530805
Michel, C., Rossion, B., Han, J., Chung, C.-S., & Caldara, R. (2006). Holistic processing is finely tuned for faces of one’s own race. Psychological Science, 17, 608–615. doi:10.1111/j.1467-9280.2006.01752.x
Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417. doi:10.1016/0166-2236(83)90190-X
Mo, L., Xu, G. P., Kay, P., & Tan, L. H. (2011). Electrophysiological evidence for the left-lateralized effect of language on preattentive categorical perception of color. Proceedings of the National Academy of Sciences, 108, 14026–14030. doi:10.1073/pnas.1111860108
Mouchetant-Rostaing, Y., & Giard, M. H. (2003). Electrophysiological correlates of age and gender perception on human faces. Journal of Cognitive Neuroscience, 15, 900–910.
Negri, G. A. L., Rumiati, R. I., Zadini, A., Ukmar, M., Mahon, B. Z., & Caramazza, A. (2007). What is the role of motor simulation in action and object recognition? Evidence from apraxia. Cognitive Neuropsychology, 24, 795–816. doi:10.1080/02643290701707412
Neininger, B., & Pulvermüller, F. (2003). Word-category specific deficits after lesions in the right hemisphere. Neuropsychologia, 41, 53–70. doi:10.1016/S0028-3932(02)00126-4
Nestor, A., Plaut, D. C., & Behrmann, M. (2011). Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis. Proceedings of the National Academy of Sciences, 108, 9998–10003. doi:10.1073/pnas.1102433108
Newell, F. N., & Bülthoff, H. H. (2002). Categorical perception of familiar objects. Cognition, 85, 113–143. doi:10.1016/S0010-0277(02)00104-X
Noppeney, U., Price, C. J., Penny, W. D., & Friston, K. J. (2006). Two distinct neural mechanisms for category-selective responses. Cerebral Cortex, 16, 437–445.
Notman, L. A., Sowden, P. T., & Özgen, E. (2005). The nature of learned categorical perception effects: A psychophysical approach. Cognition, 95, B1–B14. doi:10.1016/j.cognition.2004.07.002
O’Brien, J. L., & Raymond, J. E. (2012). Learned predictiveness speeds visual processing. Psychological Science, 23, 359–363. doi:10.1177/0956797611429800
Oliver, R. T., Geiger, E. J., Lewandowski, B. C., & Thompson-Schill, S. L. (2009). Remembrance of things touched: How sensorimotor experience affects the neural instantiation of object form. Neuropsychologia, 47, 239–247. doi:10.1016/j.neuropsychologia.2008.07.027
Olson, I. R., McCoy, D., Klobusicky, E., & Ross, L. A. (2013). Social cognition and the anterior temporal lobes: A review and theoretical framework. Social Cognitive and Affective Neuroscience, 8, 123–133. doi:10.1093/scan/nss119
Paller, K. A., Gonsalves, B., Grabowecky, M., Bozic, V. S., & Yamada, S. (2000). Electrophysiological correlates of recollecting faces of known and unknown individuals. NeuroImage, 11, 98–110.
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying properties from different modalities for concepts produces switching costs. Psychological Science, 14, 119–124. doi:10.1111/1467-9280.t01-1-01429
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2004). Sensorimotor simulations underlie conceptual representations: Modality-specific effects of prior activation. Psychonomic Bulletin & Review, 11, 164–167.
Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. Journal of Neuroscience, 32, 15728–15736. doi:10.1523/JNEUROSCI.1953-12.2012
Pulvermüller, F., Hauk, O., Nikulin, V. V., & Ilmoniemi, R. J. (2005). Functional links between motor and language systems. European Journal of Neuroscience, 21, 793–797. doi:10.1111/j.1460-9568.2005.03900.x
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22, 341–423.
Radel, R., & Clément-Guillotin, C. (2012). Evidence of motivational influences in early visual perception: Hunger modulates conscious access. Psychological Science, 23, 232–234. doi:10.1177/0956797611427920
Ratcliff, N. J., Hugenberg, K., Shriver, E. R., & Bernstein, M. J. (2011). The allure of status: High-status targets are privileged in face processing and memory. Personality and Social Psychology Bulletin, 37, 1003–1015.
Repetto, C., Colombo, B., Cipresso, P., & Riva, G. (2013). The effects of rTMS over the primary motor cortex: The link between action and language. Neuropsychologia, 51, 8–13. doi:10.1016/j.neuropsychologia.2012.11.001
Richardson, D. (2003). Spatial representations activated during real-time comprehension of verbs. Cognitive Science, 27, 767–780. doi:10.1016/S0364-0213(03)00064-8
Riesenhuber, M., & Poggio, T. (2000). Models of object recognition. Nature Neuroscience, 3, 1199–1204. doi:10.1038/81479
Roberson, D., & Davidoff, J. (2000). The categorical perception of colors and facial expressions: The effect of verbal interference. Memory & Cognition, 28, 977–986.
Roberson, D., Pak, H., & Hanley, J. R. (2008). Categorical perception of colour in the left and right visual field is verbally mediated: Evidence from Korean. Cognition, 107, 752–762.
Ross, L. A., & Olson, I. R. (2012). What’s unique about unique entities? An fMRI investigation of the semantics of famous faces and landmarks. Cerebral Cortex, 22, 2005–2015. doi:10.1093/cercor/bhr274
Sagiv, N., & Bentin, S. (2001). Structural encoding of human and schematic faces: Holistic and part-based processes. Journal of Cognitive Neuroscience, 13, 937–951.
Sangrigoli, S., & De Schonen, S. (2004). Effect of visual experience on face processing: A developmental study of inversion and non-native effects. Developmental Science, 7, 74–87. doi:10.1111/j.1467-7687.2004.00324.x
Schweinberger, S. R., Pickering, E. C., Burton, A. M., & Kaufmann, J. M. (2002). Human brain potential correlates of repetition priming in face and name recognition. Neuropsychologia, 40, 2057–2073.
Senholzi, K. B., & Ito, T. A. (2013). Structural face encoding: How task affects the N170’s sensitivity to race. Social Cognitive and Affective Neuroscience, 8, 937–942. doi:10.1093/scan/nss091
Shriver, E. R., Young, S. G., Hugenberg, K., Bernstein, M. J., & Lanter, J. R. (2008). Class, race, and the face: Social context modulates the cross-race effect in face recognition. Personality and Social Psychology Bulletin, 34, 260–278. doi:10.1177/014616720731045
Siegel, S. (2012). Cognitive penetrability and perceptual justification. Noûs, 46, 201–222. doi:10.1111/j.1468-0068.2010.00786.x
Sigala, N., Gabbiani, F., & Logothetis, N. K. (2002). Visual categorization and object representation in monkeys and humans. Journal of Cognitive Neuroscience, 14, 187–198.
Simmons, W. K., Ramjee, V., Beauchamp, M. S., McRae, K., Martin, A., & Barsalou, L. W. (2007). A common neural substrate for perceiving and knowing about color. Neuropsychologia, 45, 2802–2810. doi:10.1016/j.neuropsychologia.2007.05.002
Skipper, L. M., Ross, L. A., & Olson, I. R. (2011). Sensory and semantic subdivisions within the anterior temporal lobe. Neuropsychologia, 49, 3419–3429. doi:10.1016/j.neuropsychologia.2011.07.033
Smilek, D., Dixon, M. J., & Merikle, P. M. (2006). Revisiting the category effect: The influence of meaning and search strategy on the efficiency of visual search. Brain Research, 1080, 73–90.
Smith, E. E., Myers, N., Sethi, U., Pantazatos, S., Yanagihara, T., & Hirsch, J. (2012). Conceptual representations of perceptual knowledge. Cognitive Neuropsychology, 29, 237–248. doi:10.1080/02643294.2012.706218
Snedeker, J., & Gleitman, L. (2004). Why is it hard to label our concepts? In D. G. Hall & S. R. Waxman (Eds.), Weaving a lexicon (pp. 257–294). Cambridge, MA: MIT Press.
Spelke, E. S. (2003). What makes us smart? Core knowledge and natural language. In D. Gentner & S. Goldin-Meadow (Eds.), Language in mind: Advances in the study of language and thought (pp. 277–311). Cambridge, MA: MIT Press.
Sporer, S. L. (2001). Recognizing faces of other ethnic groups: An integration of theories. Psychology, Public Policy, and Law, 7, 36–97.
Stahl, J., Wiese, H., & Schweinberger, S. R. (2008). Expertise and own-race bias in face processing: An event-related potential study. NeuroReport, 19, 583–587. doi:10.1097/WNR.0b013e3282f97b4d
Stahl, J., Wiese, H., & Schweinberger, S. R. (2010). Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance. Neuropsychologia, 48, 2027–2040. doi:10.1016/j.neuropsychologia.2010.03.024
Stokes, D. (2012). Perceiving and desiring. stokes.mentalpaint.net. Retrieved from http://stokes.mentalpaint.net/Papers_files/Perceiving and Desiring-JULY2010-Unblinded.pdf
Thomas, C., & Baker, C. I. (2013). Teaching an adult brain new tricks: A critical review of evidence for training-dependent structural plasticity in humans. NeuroImage, 73, 225–236. doi:10.1016/j.neuroimage.2012.03.069
Thompson-Schill, S. L. (2003). Neuroimaging studies of semantic memory: Inferring “how” from “where.” Neuropsychologia, 41, 280–292. doi:10.1016/S0028-3932(02)00161-6
Todorov, A., Gobbini, M. I., Evans, K. K., & Haxby, J. V. (2007). Spontaneous retrieval of affective person knowledge in face perception. Neuropsychologia, 45, 163–173. doi:10.1016/j.neuropsychologia.2006.04.018
Valdés-Sosa, M., Bobes, M. A., Quiñones, I., Garcia, L., Valdes-Hernandez, P. A., Iturria, Y., . . . Asencio, J. (2011). Covert face recognition without the fusiform–temporal pathways. NeuroImage, 57, 1162–1176. doi:10.1016/j.neuroimage.2011.04.057
Van Bavel, J. J., Packer, D. J., & Cunningham, W. A. (2008). The neural substrates of in-group bias: A functional magnetic resonance imaging investigation. Psychological Science, 19, 1131–1139. doi:10.1111/j.1467-9280.2008.02214.x
Van Bavel, J. J., Packer, D. J., & Cunningham, W. A. (2011). Modulation of the fusiform face area following minimal exposure to motivationally relevant faces: Evidence of in-group enhancement (not out-group disregard). Journal of Cognitive Neuroscience, 23, 3343–3354. doi:10.1162/jocn_a_00016
van Dantzig, S., Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2008). Perceptual processing affects conceptual processing. Cognitive Science, 32, 579–590. doi:10.1080/03640210802035365
van der Linden, M., van Turennout, M., & Indefrey, P. (2010). Formation of category representations in superior temporal sulcus. Journal of Cognitive Neuroscience, 22, 1270–1282. doi:10.1162/jocn.2009.21270
Von Der Heide, R. J., Skipper, L. M., & Olson, I. R. (2013). Anterior temporal face patches: A meta-analysis and empirical study. Frontiers in Human Neuroscience, 7, 17. doi:10.3389/fnhum.2013.00017
Wagner, T., Valero-Cabre, A., & Pascual-Leone, A. (2007). Noninvasive human brain stimulation. Annual Review of Biomedical Engineering, 9, 527–565. doi:10.1146/annurev.bioeng.9.061206.133100
Walker, P. M., Silvert, L., Hewstone, M., & Nobre, A. C. (2008). Social contact and other-race face processing in the human brain. Social Cognitive and Affective Neuroscience, 3, 16–25. doi:10.1093/scan/nsm035
Waxman, S. R., & Markow, D. B. (1995). Words as invitations to form categories: Evidence from 12- to 13-month-old infants. Cognitive Psychology, 29, 257–302. doi:10.1006/cogp.1995.1016
Weisberg, J., van Turennout, M., & Martin, A. (2007). A neural system for learning about object function. Cerebral Cortex, 17, 513–521. doi:10.1093/cercor/bhj176
Wiesel, T. N., & Hubel, D. H. (1965). Comparison of the effects of unilateral and bilateral eye closure on cortical unit responses in kittens. Journal of Neurophysiology, 28, 1029–1040.
Willems, R. M., Labruna, L., D’Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from theta-burst transcranial magnetic stimulation. Psychological Science, 22, 849–854. doi:10.1177/0956797611412387
Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences, 104, 7780–7785.
Yoshida, H., & Smith, L. B. (2005). Linguistic cues enhance the learning of perceptual cues. Psychological Science, 16, 90–95. doi:10.1111/j.0956-7976.2005.00787.x
Zheng, X., & Segalowitz, S. (2013). Putting a face in its place: In- and out-group membership alters the N170 response. Social Cognitive and Affective Neuroscience. Advance online publication. doi:10.1093/scan/nst069
Zhou, K., Mo, L., Kay, P., Kwok, V. P. Y., Ip, T. N. M., & Tan, L. H. (2010). Newly trained lexical categories produce lateralized categorical perception of color. Proceedings of the National Academy of Sciences, 107, 9974–9978. doi:10.1073/pnas.1005669107
Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11. doi:10.1037/0096-3445.135.1.1
Author note
We thank Kim Curby for her thoughtful suggestions and feedback on a preliminary version of this article. We also thank Laura Skipper for her comments on embodied cognition. This work was supported by a National Institutes of Health grant to I.R.O. [Award No. RO1 MH091113]. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institute of Mental Health or the National Institutes of Health.
Collins, J.A., Olson, I.R. Knowledge is power: How conceptual knowledge transforms visual cognition. Psychon Bull Rev 21, 843–860 (2014). https://doi.org/10.3758/s13423-013-0564-3