Objects and their nouns in peripersonal space
Highlights
► Objects and objects’ names correspond to different motor representations. ► Objects house both stable and temporary action-relevant information. ► Objects’ names house stable action-relevant information.
Introduction
Neuroimaging and behavioural studies have demonstrated that visual perception of objects recruits sensory-motor resources. Such recruitment has been taken as evidence that objects are represented in terms of the actions they elicit (Gallese, 2000, Grafton et al., 1997, Grezes et al., 2003). Behaviourally, this implies that observing an object leads to the selection of the movements needed to act skillfully upon it (e.g., Bub and Masson, 2010, Costantini et al., 2010, Ellis and Tucker, 2000, Tucker and Ellis, 1998, Tucker and Ellis, 2001, Tucker and Ellis, 2004). This selection has been demonstrated by means of the compatibility effect, that is, a decrease in reaction times when participants execute a motor act congruent with that suggested by the observed object. For instance, an apple would evoke a whole-hand prehension while a cherry would evoke a precision grip. Such motor activations have been referred to as micro-affordances by Ellis and Tucker (2000). In their terminology, micro-affordances are action-relevant object properties whose representation is in part constituted by the partial activation of the motor patterns required to interact with them. However, skillful interaction with an object requires that it fall within the reaching space of the agent; that is, the object should be near enough to be reached and grasped. This idea is supported by fMRI data showing that reach-related areas in parietal cortex are more responsive to real objects when they are within reach than when they are out of reach (Gallivan, Cavina-Pratesi, & Culham, 2009). Following this line of reasoning, it might be hypothesized that the activation of movements aimed at grasping an object should be stronger when the object is located within the reaching space of the agent. This hypothesis was not confirmed in a previous study by Tucker and Ellis (2001).
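The compatibility effect described above is typically quantified as the difference in mean reaction time between incongruent and congruent trials. A minimal sketch, using hypothetical reaction times rather than data from any of the studies cited:

```python
# Hypothetical reaction times (ms) for one participant; the values
# are illustrative only, not data from the cited experiments.
congruent = [512, 498, 530, 505]    # response grip matches the grip the object suggests
incongruent = [560, 545, 571, 552]  # response grip mismatches the grip the object suggests

def mean(xs):
    return sum(xs) / len(xs)

# A positive value indicates slower responses on incongruent trials,
# i.e., a compatibility effect.
compatibility_effect = mean(incongruent) - mean(congruent)
print(compatibility_effect)  # 45.75
```

In practice such effects are averaged across participants and tested inferentially; the sketch only shows the basic subtraction the term refers to.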
In their study, participants had to signal whether an object, located at a distance of 15 or 200 cm, was natural or manufactured by producing precision- or power-grip responses but, notably, without any actual reaching movement: participants held the response device in their hands throughout the experiment. Their results showed a compatibility effect between the response produced by participants and the type of grip required by the object, regardless of their spatial relation. The lack of any interaction between the action evoked by the object and its location is likely due to the fact that a mere grip movement does not require spatial localization of the object (Jeannerod, Arbib, Rizzolatti, & Sakata, 1995). Indeed, within a reach-to-grasp sequence, spatial localization is critical only for the reaching component: its successful execution relies on processing the spatial relationship between the object to be reached and the body effector performing the movement (Rizzolatti & Sinigaglia, 2008).
Thus, the aim of the first experiment presented in this study was to reveal whether the lack of a reaching component in Tucker and Ellis's study might account for the failure to find an interaction between the action evoked by visually presented objects and the spatial relation between agent and object. In the first experiment, participants made speeded responses based on the category of an object: they had to signal whether an object, presented either within or outside their reachable space, was natural or manufactured by making reach-to-precision or reach-to-power grasp responses.
Certainly, reachability does matter when participants are presented with real objects, with which an actual interaction is possible or at least might be simulated. But what happens when participants are presented with an object's name? Previous studies have shown that the compatibility effect occurs even in this case (Bub et al., 2008, Glover et al., 2004, Tucker and Ellis, 2004), suggesting that object names, like the objects themselves, can evoke a suitable motor program. Nevertheless, names are conceptual representations of objects; it can therefore be hypothesized that the motor program they evoke should somehow differ from that evoked by the visual presentation of physical objects.
To test this hypothesis we ran a second experiment in which participants had to make reach-to-precision or reach-to-power grasp responses when deciding whether an object, presented either within or outside their reachable space, was congruent with a previously displayed word.
Recently, Borghi and Riggio (2009) proposed a distinction between what they have called stable and temporary action-relevant object properties. The former are related to features like shape and size, determining the type of grip with which the object is typically grasped, while the latter are related to temporary aspects, like orientation and position, which change depending on the way an object is presented with respect to the observer. If a difference does exist between the actions evoked by an object's name and those evoked by a visually presented object, it may lie in the fact that an object's name only recruits stable action-relevant object information while physical objects demand the encoding of both stable and temporary action-relevant object properties.
Section snippets
Participants
Twenty healthy participants (8 males, mean age 24 years) took part in the experiment. All participants had normal or corrected-to-normal visual acuity and were right-handed according to self-report. All gave informed consent before participating and were naive as to the purpose of the experiment.
Materials
The experimental stimuli were red/cyan anaglyph stereo pictures depicting a 3D room in which there was a table with an object placed on top of it. Anaglyph images are useful to provide a
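The red/cyan anaglyph composition used for such stimuli can be sketched as follows. This is a minimal illustration assuming standard channel mixing over 8-bit RGB stereo pairs; the function name and pipeline are hypothetical, not the authors' actual stimulus-generation code:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a stereo pair into a red/cyan anaglyph.

    left, right: HxWx3 uint8 RGB arrays (left-eye and right-eye views).
    The red channel is taken from the left-eye image, while green and
    blue are taken from the right-eye image, so red/cyan glasses route
    each view to the corresponding eye and yield an impression of depth.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red   <- left-eye view
    out[..., 1:] = right[..., 1:]  # green, blue <- right-eye view
    return out
```

Rendering the two views with a small horizontal camera offset before merging them is what produces the binocular disparity that makes the depicted object appear at a given distance from the observer.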
Experiment 2
In the first experiment we found a compatibility effect for manufactured objects confined to participants’ reaching space. This finding suggests that action-related information associated with visually presented objects is spatially constrained, that is, it is evoked provided that the object is actually reachable by the perceiver. In this second experiment we aimed at investigating whether objects’ names are also able to evoke action-related information and, if this is the case, whether such
Discussion
The aim of this study was two-fold. First, it aimed at investigating whether activation of gestures associated with visually presented objects is modulated by their location in space, that is, by their being positioned within or outside the perceiver's reachability. Second, it aimed at investigating whether action-related information evoked by visually presented objects differs from that evoked by the objects’ names. Regarding the first question, we found that action-related information is
Acknowledgements
LR and VG were supported by MIUR (Ministero Italiano dell’Università e della Ricerca). FF, LR, and VG were funded by the EU grants ROSSI and TESIS. We thank Anna M. Borghi for helpful comments concerning this study, and Patricia M. Gough for her help in improving the manuscript. We also thank Ettore Ambrosini for his help in stimuli preparation.
References (41)
- et al. Categorization and action: What about object consistence? Acta Psychologica (Amst) (2010)
- et al. Are visual stimuli sufficient to evoke motor information? Studies with hand primes. Neuroscience Letters (2007)
- et al. Sentence comprehension and simulation of object temporary, canonical and stable affordances. Brain Research (2009)
- et al. Evocation of functional and volumetric gestural knowledge by objects and words. Cognition (2008)
- Category-specificity in visual object recognition. Cognition (2009)
- et al. Double dissociation of semantic categories in Alzheimer’s disease. Brain and Language (1997)
- et al. Premotor cortex activation during observation and naming of familiar tools. NeuroImage (1997)
- et al. Grasping objects: The cortical mechanisms of visuomotor transformation. Trends in Neurosciences (1995)
- et al. Is a picture worth a thousand words? Evidence from concept definitions by patients with semantic dementia. Brain and Language (1999)
- et al. Commonality of neural representations of words and pictures. NeuroImage (2011)
- Action priming by briefly presented objects. Acta Psychologica
- On the relations between action planning, object identification, and motor representations of observed actions and objects. Cognition
- Perceptual symbol systems. Behavioral and Brain Sciences
- Human anterior intraparietal area subserves prehension: A combined lesion and functional MRI activation study. Neurology
- Grasping beer mugs: On the dynamics of alignment effects induced by handled objects. Journal of Experimental Psychology: Human Perception and Performance
- The multiple semantics hypothesis: Multiple confusions? Cognitive Neuropsychology
- Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. The Journal of Neuroscience
- When objects are close to me: Affordances in the peripersonal space. Psychonomic Bulletin & Review
- Where does an object trigger an action? An investigation about affordances in space. Experimental Brain Research
- On problem solving. Psychological Monographs