Visuomotor priming by pictures of hand postures: perspective matters
Introduction
Observing the actions of conspecifics engages predictive motor representations in the observer, even when the observer has no intention of responding with an imitative or complementary behaviour. Motor cortical areas have also been shown to code observed graspable objects and tools in terms of one or more potential actions with these objects [21], [29]. In both instances of motor involvement, during observation of actions and of objects, the relevant actions are internally simulated by the observer [19].
The experimental methods employed over the last decade to study these visuomotor couplings include a number of neurophysiological methods (single cell recordings, [14], [27], [33]; brain imaging methods, see reviews [11], [19], [26], [28]; transcranial magnetic stimulation, [13], [31]) as well as behavioural methods (transfer paradigms, [17], [34], [35], [36]; stimulus-response compatibility paradigms, [3], [4], [7], [32]). As a result, the basic phenomenon of motor involvement during action observation is now well documented, led by the research on ‘mirror neurons’ [14], [27], [33]. This work has further contributed to raising interest in action imitation and observational learning, both of which are likely to build on a mirror system architecture [1], [20], [28], [37], [38].
In this general context, we pursued two aims in the present behavioural study. Firstly, we sought to clarify the impact of the observer’s perspective on motor representation by employing a visuomotor priming task. Secondly, we wanted to gather further evidence for the automaticity of these priming effects.
With respect to our first aim, we used, as prime stimuli, pictures of a hand that matched the end posture of the observer’s own hand when performing the displayed action (‘Own perspective’). We contrasted these with pictures of hand end postures in the perspective of another person, facing the participant with a mirror-symmetric hand posture (‘Other perspective’). The observer’s perspective has not been systematically manipulated in previous research on visuomotor priming. Its study is of practical relevance for the design of displays in observational learning procedures. In addition, we were hoping to gain further insight into the neuro-cognitive mechanisms that underlie the documented priming effects. In the following, we outline the two main explanations offered for these effects, and ask what these might predict for the two perspectives.
Based on the work by Craighero et al. [9] on object priming, we distinguish ‘visuo-motor’ and ‘motor-visual’ priming, and use ‘visuomotor’ as a neutral umbrella term for both. Motor-visual priming was the preferred interpretation in Craighero et al.’s [7] recent study with hand posture primes. They demonstrated that the initiation of a pre-specified reach-to-grasp action can be modulated by pictures of a hand that matched or did not match the planned effector end orientation. According to them, motor preparation biases visual processing. As an underlying neurophysiological mechanism, they suggested that motor preparation not only involves premotor cortical areas, but “should evoke also a representation of the prepared action in visual terms” (p. 498), located in posterior parietal and superior temporal areas. Responses to hand pictures that match this anticipatory visual representation should be facilitated, due to the priming effects of the internal representation on the visual processing of the picture stimuli. Although these priming effects arise, strictly speaking, from competing visual representations, the label ‘motor-visual’ aptly captures their motor origin, in that the internally expected sensory consequences derive from motor preparation.
Given that, in motor-visual priming, motor and visual representations refer to the actor’s own hand, one would expect ‘Own perspective’ displays to produce stronger effects than ‘Other perspective’ displays for this type of priming. It is thus surprising that Craighero et al. [7] only used hand postures in ‘Other perspective’, which do not resemble the expected sensory consequences as directly as ‘Own perspective’ stimuli. Clearly, a comparison of both perspectives, as undertaken in the present study, is called for.
Craighero et al.’s [7] results are also open to a visuo-motor interpretation. In this account, which has been prevalent in the interpretation of mirror neurons and related behavioural findings, visual hand postures automatically activate a corresponding motor representation, regardless of whether the observer has already prepared a response or not. A visuo-motor account thus includes situations where the observer has little advance knowledge about the visual event (e.g. an unexpected social signal), whereas a motor-visual account presupposes the observer’s own motor preparation.
What can be predicted from a visuo-motor account regarding the impact of perspective? Rizzolatti and Luppino [29] recently suggested that the congruency between premotor neurons and visual descriptions of seen actions in temporal and parietal cortex might originate from action execution: “in the case of mirror neurons, the matching should occur between the hand action commanded by a certain motor prototype and the vision, by the agent of the action, of his/her own hand. Once this initial visuomotor link is established, it is progressively generalized to the hands of other individuals” (ibid, p. 897). Thus, associating the actions of others with the observer’s motor repertoire is seen to develop on the basis of links between motor commands and visual input from one’s own hand. Accordingly, a visuo-motor interpretation might also predict a primacy of ‘Own perspective’ displays. However, a lifetime’s experience with body parts in both ‘Own’ and ‘Other perspective’ is likely to result in strong visuo-motor associations for both perspectives. Thus, effects of visuo-motor priming should not necessarily differ for the two perspectives. Given the massive exposure to the actions of others, and the need for their rapid interpretation, it is even possible that ‘Other perspective’ displays produce stronger visuo-motor priming effects than ‘Own perspective’ displays.
In other work with hand displays [3], [4], [32], perspective has likewise not been systematically manipulated. Again, the basic finding in these studies was that responses congruent with a (task-irrelevant) hand posture were initiated faster than responses in the presence of incongruent displays. Stürmer et al. [32] interpreted their results with reference to Greenwald’s [16] ideomotor principle and Prinz’s [23] common coding approach, in the sense that actions become automatically activated by visual events that correspond to their effects, i.e. as a visuo-motor effect. Unlike in motor-visual priming, interference here arises between competing motor representations that are concurrently activated by different features of the display.
Whereas Stürmer et al. [32] used, as prime stimuli, pictures of hand gestures that resembled the participants’ view of their own hand, ‘Other perspective’ stimuli were used in subsequent work by Brass et al. [3], [4], namely pictures of others’ hands with a lifting index or middle finger. Again, responses to a symbolic instruction were facilitated by finger-movement displays that were congruent with the required response [4]. These authors also entertain a visuo-motor account and explicitly state “that a motor-visual priming mechanism … is unlikely to be able to explain fully the RT patterns of the present experiments” (ibid, p. 139).
In summary, we can expect ‘Own perspective’ primes to exert stronger effects than ‘Other perspective’ primes for motor-visual priming, and presumably equal effects for visuo-motor priming. In the available studies, both perspectives proved effective, but a direct comparison between ‘Own’ and ‘Other perspective’ has not yet been undertaken. The results of the present study indicate that priming mechanisms can differ substantially for ‘Own’ and ‘Other perspective’ stimuli.
Turning to the second aim of the present study, we employed a simple response task in order to obtain more clear-cut evidence for the automaticity of the priming effects than previously available. Amongst the existing behavioural studies, only Brass et al. [3] used a simple response task; choice response tasks were adopted by Brass et al. [4] and Stürmer et al. [32], and a go/no-go choice task by Craighero et al. [7]. In agreement with Brass et al. [3], we find choice response tasks less convincing than simple response tasks as evidence for automatic response activation by visual stimuli, simply because participants in choice tasks are actively seeking information about the required response from the visual array. The demonstration that this search can be ‘misled’ by stimulus attributes that are task-irrelevant but that specify aspects of the required response is, in our view, a less convincing indicator of automatic response activation than the impact of the same gesture on a response that does not require further specification. The go/no-go task employed by Craighero et al. [7] likewise compromises an interpretation in terms of automatic processing, because their task required the visual analysis of precisely that stimulus attribute (hand orientation) which was expected, and shown, to affect response latencies.
Experiment 1
We adopted a simple response procedure in order to further substantiate the automaticity of priming by observed hand postures. The task was closely modelled after Craighero et al.’s [8], [10] earlier studies on object priming. In addition to the perspective of the primes and the congruency between instructed hand orientation and prime, we also manipulated the location at which the prime stimuli were shown (on a monitor above the hand’s target location or, via a mirror, precisely at the target location),
Experiment 2
The main objective of Experiment 2 was to replicate the ‘Own-perspective advantage’ for hand fixation precues. In addition, we wanted to identify the time window over which congruency effects could be observed. Experiment 1 revealed such effects despite the rapid responses typical of simple response tasks. For the automatic modulation of hand orientation, latencies of approximately 300 ms are likely to represent the lower boundary at which congruency effects can be observed (see median split
General discussion
Three main findings were obtained in the present study. Firstly, in both experiments, priming effects by hand postures were demonstrated in a simple response task. We interpret this as strong evidence for the automatic encoding of the orientation of observed hand postures, as Brass et al. [3] did for finger selection. In other related studies [4], [7], [32], choice or go/no-go tasks were used that compromise an interpretation in terms of automatic encoding.
Secondly, we have tentatively
Acknowledgements
The experiments were carried out by P. Taylor as part of a doctoral thesis under the supervision of B. Hopkins and S. Vogt. Funding for P. Taylor was provided by a studentship from the ESRC. Hardware and software were partially funded via the Small Grant Scheme at Lancaster University. We are grateful to Clive Barker and Geoff Rushforth for technical support, as well as to the editor and two anonymous reviewers for their helpful suggestions.
References
- et al., Synthetic brain imaging: grasping, mirror neurons and imitation, Neural Networks (2000)
- et al., Movement observation affects movement execution in a simple response task, Acta Psychologica (2001)
- et al., Compatibility between observed and executed finger movements: comparing symbolic, spatial, and imitative cues, Brain and Cognition (2000)
- et al., Hand action preparation influences the responses to hand pictures, Neuropsychologia (2002)
- Neural simulation of action: a unifying mechanism for motor cognition, Neuroimage (2001)
- et al., Premotor cortex and the recognition of motor actions, Cognitive Brain Research (1996)
- et al., The cortical motor system, Neuron (2001)
- et al., I know what you are doing: a neurophysiological study, Neuron (2001)
- Imagery and perception-action mediation in imitative actions, Cognitive Brain Research (1996)
- et al., Motor cortical activity preceding a memorized movement trajectory with an orthogonal bend, Experimental Brain Research (1993)
- Vision of the hand and environmental context in human prehension, Experimental Brain Research
- Visuomotor priming, Visual Cognition
- Action for perception: a motor-visual attentional effect, Journal of Experimental Psychology: Human Perception and Performance
- Evidence for visuomotor priming effect, NeuroReport
- Gaze perception triggers reflexive visuospatial orienting, Visual Cognition
- Motor facilitation during action observation: a magnetic stimulation study, Journal of Neurophysiology
- Action recognition in the premotor cortex, Brain