Research report
Neural representations of graspable objects: are tools special?
Introduction
Tools are a special class of objects. Not only can they be processed for what they are, but also for how they can be used. Gibson [17] defined the term affordances to refer to properties in the environment that are relevant to an animal's goals. Objects can have multiple affordances that define the way they will be grasped. For example, a toothbrush might afford brushing teeth if held by its handle or poking a small hole if held by its bristles. However, a toothbrush reminds us that, although an object may have multiple affordances, it usually has one specific use that is associated with its identity. This functional specificity of tools distinguishes them from other types of objects (e.g., a rock) that may be graspable but do not have a semantic identity tied to an action representation. The unique relationship between object identity and action in tools can help address questions about the separability and interaction of different visual processing streams for “what” and “how” [44]. Research from neuropsychology [18], [32] and psychophysics [3], [5], [18], [27] supports the notion that systems for phenomenal awareness of objects and visually guided actions are dissociable. However, it is also clear that the systems interact. Creem and Proffitt [8] demonstrated that one condition for interaction is when a visually guided action must conform to a tool's functional identity. The present studies aimed to investigate the contribution of knowledge about an object's function to representations for actions associated with graspable objects.
Recent research using behavioral and neuroimaging paradigms has demonstrated that the visual perception of tools outside of the context of the execution of a grasp is linked to action representations. For example, using a series of behavioral tasks, Tucker and Ellis [62], [63] posited that objects “potentiate” actions even when the goal of a task is not to directly interact with the object. In one study, they demonstrated a Simon effect, finding that the position of visual objects' handles had a significant effect on the speed of key-press responses, although the handle position was irrelevant to the task (deciding if the object was upright or inverted). For example, handle orientation toward the right facilitated the key-press response made with the right hand. The finding that viewing an object in a certain position affected the potential for subsequent action suggests that action-related information about objects is represented automatically when an object is viewed. In a more recent study [63], participants viewed objects and decided if they were natural or manufactured. The response measure was to perform a precision or power grasp to “answer” natural or manufactured. Thus, the grasp itself (power or precision) was irrelevant to the task but could be compatible or incompatible with the visual object presented. The results showed that grasp compatibility influenced speed of response, again suggesting that objects may be automatically perceived for their potential actions. The findings were recently supported by a functional magnetic resonance imaging (fMRI) paradigm [25], which found greater activation in parietal, premotor, and inferior prefrontal cortex accompanying a greater reaction time difference between compatible and incompatible trials.
Over recent years, there has been a burst of functional neuroimaging studies that have examined the relationship between perception and action in the context of tools using a variety of tasks such as object viewing and naming, action observation, imagined actions, and decisions about object function [1], [4], [6], [13], [21], [23], [37]. Furthermore, research in monkey neurophysiology has elegantly demonstrated significant links between vision and motor control by defining neurons in premotor and parietal cortex that are responsive to both perception and action [53]. Together, these studies suggest some functional equivalence between observed, imagined, and real actions, but they leave open the question of whether representations associated with tools are influenced by human knowledge of tool function or by the tool's visual structure that indicates graspability. We briefly review the literature on the neural representations associated with perception and action in three categories directly relevant to the present studies: perception of action-related objects, perception of action, and imagined action.
Recent fMRI and PET studies have explored neural distinctions in visual recognition of different categories of objects such as animals, faces, houses, and artifacts. A number of these studies have examined the visual recognition of tools. These visual tool studies have varied in their task goals and control images. Studies have involved viewing and naming tools compared with nonsense objects [43], fractal patterns [21], or other nonmanipulable objects such as animals, faces, and houses [6], [7], [48]. Other recent paradigms have assessed visual tools' influence on attention [29], the processing of a tool's motion [1], or decisions about a tool's function [37] or orientation [23], [64]. In all, these studies have concluded that there are distinct regions in both the ventral and dorsal streams associated with the visual recognition of tools versus other types of objects. Namely, activation has been found in the middle temporal cortex and more medial regions of the fusiform gyrus, as well as the dorsal and ventral premotor and the posterior parietal cortex. Chao and Martin [6] have suggested that the premotor and parietal activation may be associated with the retrieval of information about hand movements associated with manipulable objects. This claim is consistent with similar patterns of activation seen in imagined hand movement tasks.
Primate studies of “mirror neurons,” neurons in the ventral premotor area F5 that fire when a monkey performs a goal-directed action or observes someone else performing that action [53], have motivated researchers to define a similar action–recognition system in humans. This link between perception and execution of actions has been proposed as one account for the early developing ability of humans to imitate. Human neuroimaging studies have examined the perception of actions with tasks such as observation of grasping [24], [54] and recognition or imitation of actions [4], [13], [24], [28], [30], [38]. Neural activation has been commonly reported in the posterior parietal cortex and the posterior inferior frontal cortex in the region of Broca's area. This pattern of activity in the inferior frontal cortex has led some to suggest that Broca's area is a human homologue of the ventral premotor area F5 in monkeys [30] and that action recognition and language production share common neural substrates [28]. However, not all action observation studies have found activation in the inferior frontal gyrus; Grezes et al.'s [24] recent data suggest that the human ventral precentral sulcus may better characterize monkey F5 mirror neuron function.
Imagined actions can be categorized into two broad types of tasks: explicit goal-directed actions and spatial decisions that recruit mental body transformations. Goal-directed actions involving explicit imagined grasping, imagined joystick control, and imagined hand/finger-movement tasks have produced activity in supplementary motor area (SMA), anterior cingulate, lateral premotor cortex, inferior frontal gyrus, posterior parietal cortex, and the cerebellum [12], [16], [19]. Some have also found dorsal prefrontal cortex, basal ganglia [16], and primary motor activation [51]. Johnson et al. [33] recently distinguished between motor planning processes and movement simulation in an event-related fMRI imagined grasping task. They found that predominantly left-hemisphere motor-related cortical regions (and the right cerebellum) were active during the hand preparation component of the task for both hands, whereas grip selection activated bilateral dorsal premotor cortex and posterior parietal cortex contralateral to the imagined limb. These results suggest that studies examining motor imagery may be combining neural representations involved in both the planning and execution of imagined actions. A second category of imagined action tasks includes implicit motor imagery, in which an observer is asked to make a spatial decision and, in doing so, recruits motor processing. These types of tasks typically involve handedness or same/different decisions about visually presented hands or objects [39], [41], [47], [52], [66]. These tasks have yielded regions of activation similar to those of explicit goal-directed tasks, e.g., posterior parietal cortex, posterior temporal cortex, premotor cortex, and some primary motor cortex and cerebellar activation. However, more specific neural distinctions have been found based on strategies and spatial frames of reference used in the transformation.
The evidence of shared neural representations for real, imagined, and potential action is supported by Grezes and Decety's [22] meta-analysis on neuroimaging tasks involving motor execution, simulation, observation, and verb generation/tool naming. They found overlapping networks of activation in the SMA, dorsal and ventral premotor cortex, and inferior and superior regions of the parietal cortex. Furthermore, in a recent PET study, Grezes and Decety [23] compared several different types of tasks involving tools. In tasks of tool orientation judgment, mental simulation of grasping and using tools, silent tool naming, and silent verb generation, they found a common network of activation consistent with the findings of the meta-analysis described above. They suggested that representations for action are automatically activated by visual tools regardless of whether the subject had an intention for action, in agreement with the findings of Tucker and Ellis and Tucker et al. [25], [62], [63].
An unanswered question involves the nature of the tool representation that activates motor processing; it could be the semantic knowledge about function that is associated with tools, the inherent graspability of tools based on their visual structure, or an interaction between these variables. Neuroimaging studies finding premotor cortex activation with the visual presentation of graspable objects have used only familiar tools. However, single-unit recording studies in nonhuman primates have found ventral premotor cortex and anterior intraparietal neurons that respond to the visual presentation of many different-shaped graspable objects [45], [46]. Furthermore, some research indicates that there is a direct route from the visual properties of an object to action that bypasses the recruitment of semantic knowledge [56]. The present studies addressed the question of whether representations for action differ for graspable objects that are associated with familiar functions and those that are not.
We examined motor representations associated with graspable objects using fMRI by presenting two classes of objects that varied in their association with specific functions. Images of 3D tools (objects with a familiar functional identity) and 3D shapes (graspable objects with no known function) were presented while participants performed two different tasks, passive viewing and imagined grasping. In the passive viewing task, our goal was to assess whether object and motor processing regions would be activated to the same extent for function-specific and neutral graspable objects. In the imagined grasping task, we examined whether additional activation associated with planning a meaningful action (e.g., grasping a hammer versus grasping a cylinder) would be associated with functionally familiar objects. As a secondary question, participants were also asked to perform a real finger-clenching task with both hands to identify the hand regions of the primary motor cortex and to assess whether imagined and executed actions shared similar representations there. In all, our results suggest that tools are special graspable objects. In the passive viewing task, we found evidence for greater motor processing (activation in parietal and premotor cortex) associated with viewing tools compared to neutral graspable shapes. In the imagined grasping task, both tools and shapes led to a network of premotor, posterior parietal, and posterior temporal regions predicted for simulated visually guided actions. However, differences emerged in the location and extent of activation in both the dorsal and ventral streams. These results suggest that an object's functional identity influences its perceived potential for action.
Subjects
Twelve healthy right-handed subjects (aged 21–36, seven male) participated in the experiment. All subjects were naive as to the purpose of the experiment. The experimental procedures were approved by the University of Utah Institutional Review Board, and all participants gave their informed consent before beginning the study.
MRI acquisition
Functional MRI tasks were performed on a Picker Eclipse 1.5-T scanner. EPI images were acquired in a quadrature head coil with slice thickness of 5 mm, FOV of 55.4×25.6 cm,
Viewing graspable objects
In all, the results indicated a distinction in the recruitment of motor processing areas associated with viewing tools and shapes (see Fig. 2 and Table 1). Viewing tools compared to their baseline scrambled images resulted in clusters of activation bilaterally in the ventral temporal lobes as well as left postcentral gyrus, left precentral gyrus (ventral premotor cortex), and the medial frontal gyrus (pre-SMA). This activation in the posterior temporal cortex, posterior parietal cortex, and
Discussion
A number of recent neuroimaging studies involving tools and simulated actions have provided a basis for identifying ventral and dorsal visual processing regions involved in the visual processing of manipulable objects and their associated actions. The posterior middle temporal gyrus and the middle fusiform gyrus have been associated with perceiving and naming tools [1], [7], [34]. The dorsal and ventral premotor cortex and posterior parietal cortex have been implicated in tasks involving
Acknowledgments
We thank Natalie Sargent, Shawn Yeh, and Jayson Neil for help in image processing. This work was supported by a University Funding Incentive Seed Grant, University of Utah, to the first author.
References (66)
- et al., Parallel visual motion processing streams for manipulable objects and human movements, Neuron (2002)
- Do action systems resist visual illusions?, Trends Cogn. Sci. (2001)
- et al., Representation of manipulable man-made objects in the dorsal stream, Neuroimage (2000)
- et al., Premotor cortex activation during observation and naming of familiar tools, Neuroimage (1997)
- et al., Does visual perception of object afford action? Evidence from a neuroimaging study, Neuropsychologia (2002)
- et al., Activations related to “mirror” and “canonical” neurons in the human brain: an fMRI study, Neuroimage (2003)
- et al., The human action recognition system and its relationship to Broca's area: an fMRI study, Neuroimage (2003)
- et al., A kinematic analysis of reaching and grasping movements in a patient recovering from optic ataxia, Neuropsychologia (1991)
- et al., Selective activation of a parietofrontal circuit during implicitly imagined prehension, Neuroimage (2002)
- et al., A selective impairment of hand posture for object utilization in apraxia, Cortex (1995)
- A parametric study of mental spatial transformations of bodies, Neuroimage
- A parieto-premotor network for object manipulation: evidence from neuroimaging, Exp. Brain Res.
- Segregation of cognitive and motor aspects of visual function using induced motion, Percept. Psychophys.
- Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study, Eur. J. Neurosci.
- Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects, Nat. Neurosci.
- Grasping objects by their handles: a necessary interaction between cognition and action, J. Exp. Psychol. Hum. Percept. Perform.
- Grasping novel objects: functional identity influences representations for real and imagined actions
- Human brain imaging reveals a parietal area specialized for grasping
- Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas, Exp. Brain Res.
- Mapping motor representations with positron emission tomography, Nature
- Brain activity during observation of actions: influence of action content and subject's strategy, Brain
- Modality-specific and supramodal mechanisms of apraxia, Brain
- Transcranial magnetic stimulation of primary motor cortex affects mental rotation, Cereb. Cortex
- Partially overlapping neural networks for real and imagined hand movements, Cereb. Cortex
- The Ecological Approach to Visual Perception
- A neurological dissociation between perceiving objects and grasping them, Nature
- Localization of grasp representation in humans by positron emission tomography, Exp. Brain Res.
- Functional anatomy of pointing and grasping in humans, Cereb. Cortex
- Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis, Hum. Brain Mapp.
- Objects automatically potentiate action: an fMRI study of implicit processing, Eur. J. Neurosci.
- Searching for a baseline: functional imaging and the resting human brain, Nat. Rev. Neurosci.
- The effect of pictorial illusion on prehension and perception, J. Cogn. Neurosci.
- Graspable objects grab attention when the potential for action is recognized, Nat. Neurosci.