Temporal dynamics of action perception: Differences in ERPs evoked by object-related and non-object-related actions
Introduction
Several influential models of gesture production suggest that distinct cognitive mechanisms are devoted to the execution of different gesture types. Based on the observation of apraxic patients, such models typically propose two distinct routes for action, a semantic route and a non-semantic route (Buxbaum, 2001, Cubelli et al., 2000, Gonzalez Rothi et al., 1991). The two routes would be differentially involved in the production of meaningless, transitive, and intransitive gestures. While imitation of meaningless gestures can only rely on the direct, non-semantic route for action, execution of both transitive (i.e. object-related) and intransitive (i.e. non-object-related) gestures can tap into either the semantic or the non-semantic route. In other words, executing intransitive actions as well as pantomimes of object use may involve semantic representations. However, it is still unclear whether transitive and intransitive gestures rely on distinct cognitive and neural mechanisms.
The distinction between production of transitive and intransitive gestures was first documented in the neuropsychological literature. Patients with strongly impaired transitive gesture production and relatively preserved intransitive gesture execution have been reported repeatedly following left hemisphere lesions (Dumont et al., 1999, Foundas et al., 1995, Haaland et al., 2000, Rapcsak et al., 1993, Roy et al., 1991). Based on these observations, it has been suggested that transitive and intransitive gesture execution rely on distinct cognitive networks. However, transitive gestures could simply be more difficult to perform than intransitive gestures. Several behavioral results are consistent with this alternative interpretation. Using more refined measures of gesture production accuracy, Carmo and Rumiati (2009) showed that healthy participants imitated intransitive gestures better than transitive gestures (see also Mozaz, Rothi, Anderson, Crucian, & Heilman, 2002, for similar results). Thus, differences in gesture execution complexity could account for the greater deficits in transitive gesture production frequently reported in apraxic patients.
In this context, neuroimaging studies have tried to identify the neural substrates that would be specific to transitive action planning and execution (Bohlhalter et al., 2009, Culham, 2004, Fridman et al., 2006, Johnson-Frey et al., 2005, Króliczak and Frey, 2009). Although both gesture types recruit a left-lateralized fronto-parietal network (but see Bohlhalter et al., 2008 for a right hemispheric dominance for intransitive gestures), some areas of this network have been shown to be more active during preparation and/or execution of transitive compared to intransitive actions (Buxbaum et al., 2007, Culham et al., 2003, Fridman et al., 2006, Haaland et al., 2000, Króliczak and Frey, 2009, Wheaton and Hallett, 2007). As Króliczak and Frey (2009) suggested in their interpretation, the observed differences may also depend on movement complexity, since sensory-motor cortex activity and movement complexity are closely linked (Gut et al., 2007). Thus, findings from neuroimaging studies corroborate neuropsychological observations and suggest that the stronger fronto-parietal involvement observed during production of transitive compared to intransitive gestures is probably caused by the greater difficulty of transitive gesture execution.
Recently, the pattern of apraxic deficits presented by an autistic child re-fueled the debate on the transitive–intransitive gesture distinction. Ham, Bartolo, Corley, Swanson and Rajendran (2010) reported the case of JK, who exhibited a selective impairment in producing intransitive gestures with normal scores in transitive gesture production. The existence of a double dissociation between the deficits presented by this child and the impairments of apraxic patients showing the opposite pattern suggests that the difference between transitive and intransitive gesture execution goes beyond difficulty.
In the present study, we aimed at investigating the neural correlates of transitive (object-related) and intransitive (non-object-related) action processing in perceptual tasks. We used perceptual tasks for two reasons. First, neuroimaging studies using production tasks lack appropriate baseline conditions for transitive and intransitive gesture comparison (Króliczak & Frey, 2009). Since gesture complexity is not matched between action types, it is difficult to draw conclusions about the specific neural substrates of object-related and non-object-related actions from production data. This limitation is easier to overcome in perception. Accordingly, we designed perceptual control stimuli that were equivalent to the perceived transitive and intransitive actions in terms of visual complexity. Second, in order to keep transitive and intransitive gestures equivalent, objects could not be presented. Moreover, we wanted to avoid pantomime tasks, since there is evidence of partially distinct neural circuits for real and pantomimed gesture execution (Króliczak et al., 2007, Senkfor, 2008). Thus, the use of a perceptual paradigm allowed the assessment of object-related actions without involving objects or pantomimes.
On one hand, the two-routes-of-action models (Buxbaum, 2001, Cubelli et al., 2000, Gonzalez Rothi et al., 1991) suggest that both object-related and non-object-related actions could involve some kind of semantic representation. On the other hand, it has been argued that in many situations, object-related actions require accessing both action and object representations (Buxbaum, 2001, Frey, 2007). This characteristic obviously cannot apply to non-object-related actions, suggesting that additional semantic processes are involved in visual perception of object-related actions. Thus, perception of object-related actions, but not non-object-related actions, would involve the recruitment of object knowledge and in particular object motor features (Buxbaum et al., 2007, Chao and Martin, 2000, Martin, 2007). Based on this idea and on the double dissociation observed in production (Dumont et al., 1999; Foundas et al., 1999; Haaland et al., 2000; Ham et al., 2010; Rapcsak et al., 1993, Roy et al., 1991), differences in cerebral activity during observation and recognition of object-related and non-object-related actions should be expected. In perceptual tasks, neuroanatomical and neuroimaging studies that directly compared object-related and non-object-related actions are even more limited (Agnew et al., 2012, Pazzaglia et al., 2008, Villarreal et al., 2008). Villarreal et al. (2008) reported some differences in the inferior frontal gyrus (IFG) between the action types. However, these differences were attributed to extra processing demands for non-object-related gesture perception, probably because of the symbolic nature of the gestures presented (e.g., stop, salute, hitch hike, crazy, victory). More recently, Agnew et al. (2012) showed different fMRI responses in frontal and parietal cortices during observation of object-related compared to meaningless non-object-related actions, but these results could be due to the use of meaningless actions in the non-object-related condition.
Indeed, fronto-parietal areas may be more strongly recruited when action processing follows the semantic route, regardless of the type of semantic representation involved. Taken together, patient and fMRI studies have not provided a coherent pattern of data in support of a clear distinction between object-related and non-object-related gesture processing during action production or perception.
One possible reason for the reported inconsistencies may be that the distinction between object-related and non-object-related actions is relatively fine-grained and more visible in the timing of brain activity within the fronto-parietal network. Accordingly, fMRI paradigms would not be best suited to investigate this issue. Thus, we used EEG, and in particular the event-related potential (ERP) method, to assess the temporal dynamics of object-related and non-object-related action processing during perceptual tasks. With EEG, we could determine the specific moment in processing when differences between action types emerged. It was thus possible to discriminate between effects related to visual complexity occurring at early processing stages and semantic effects occurring at later processing stages. Although the neural correlates of action observation have been extensively studied using EEG techniques (e.g. Silas, Levy, Nielsen, Slade, & Holmes, 2010 using whole-body movements, Perry & Bentin, 2009 using hand grasps, or Urgen, Plank, Ishiguro, Poizner, & Saygin, 2013 for a comparison between human and non-human motion), to the best of our knowledge no EEG paradigm has explicitly contrasted object-related and non-object-related actions before.
In light of previous studies, it was critical for our EEG paradigm to control for differences in stimulus complexity between the two action types. Thus, we used point-light display (PLD) stimuli (Johansson, 1973) in order to control for physical differences between stimuli. Baseline control PLD stimuli were created for each action type, in which the general movement characteristics (duration, number of points, and kinematics of the points) matched those of the original action but the movement information was meaningless. Moreover, PLD stimuli provided biological movement information only – without giving any visual object information in the case of object-related action – and minimized context effects. We were thus able to test the distinction between the temporal dynamics of object-related and non-object-related action processing with strictly equivalent stimuli, while controlling for potential differences in stimulus complexity.
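The paper does not specify the exact scrambling algorithm, but the constraints it states (same duration, same number of points, same point kinematics, meaningless configuration) can be illustrated with a minimal sketch. The function below is a hypothetical implementation: each marker keeps its own frame-by-frame motion, but its trajectory is randomly rotated and shifted, destroying the configural action information while preserving per-point speed profiles.

```python
import numpy as np

def scramble_pld(trajectories, rng=None, max_offset=0.2):
    """Create a meaningless control version of a point-light display.

    trajectories : array of shape (n_points, n_frames, 2) with the x/y
    position of each marker over time. Each marker keeps its own
    frame-by-frame displacement (hence its duration and kinematics),
    but the trajectory is rotated by a random angle and moved to a
    random start position, which removes the action information.
    """
    rng = np.random.default_rng(rng)
    scrambled = np.empty_like(trajectories)
    for i, traj in enumerate(trajectories):
        # Express the trajectory relative to its starting position.
        rel = traj - traj[0]
        # A rigid rotation preserves point speed at every frame.
        theta = rng.uniform(0.0, 2.0 * np.pi)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        # Random new start position within +/- max_offset of the original.
        start = traj[0] + rng.uniform(-max_offset, max_offset, size=2)
        scrambled[i] = rel @ rot.T + start
    return scrambled
```

Because rotation and translation are rigid transforms, the frame-to-frame displacement magnitudes of every point are identical in the original and scrambled displays, which is one way to satisfy the matched-kinematics constraint described above.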
Although time-frequency analysis, and in particular mu rhythm modulation, has been successfully used to highlight motor system involvement during observation of PLD of biological movements (Perry and Bentin, 2011, Perry et al., 2010), mu rhythm modulation would not be expected to be sensitive to semantic differences during action observation. Since the objective of the present study was to distinguish between the semantic processes at play during observation of two types of biological movements, we focused our analysis on ERP components.
Several ERP components have been shown to be sensitive to the observation of PLDs presenting whole-body intransitive movements (Hirai et al., 2003, Hirai et al., 2005, Jokisch et al., 2005, Krakowski et al., 2011), starting around 200 ms after stimulus onset. We predicted ERP differences between action types at two stages of PLD visual processing. First, on early visual components (P100 and N170), known to reflect the analysis of the physical features of the stimuli (Hirai et al., 2003, Hirai et al., 2005, Jokisch et al., 2005), we expected to find differences related to residual differences in stimulus complexity. Second, and most critically, differences during perception of the two gesture types were expected on late ERP components known to be related to object semantic processing. Perception of object-related actions, but not non-object-related actions, would involve the recruitment of some parts of object knowledge, perhaps related to object motor features (Buxbaum et al., 2007, Chao and Martin, 2000, Martin, 2007). Traditionally, semantic processing has been associated with brain responses occurring around 350–400 ms after stimulus onset, often identified as the N400 component (for instance, Balconi and Caldiroli, 2011, Proverbio and Riva, 2009, Van Elk et al., 2010). However, recent studies have shown that semantic processing of a stimulus could start earlier. For instance, brain responses associated with semantic tasks (e.g., meaningful/meaningless decision, semantic incongruence detection) on action verbs (Moseley et al., 2013, Pulvermüller et al., 2001), action pictures (Meyer, Harrison, & Wuerger, 2013) or object pictures (Lloyd-Jones et al., 2012, Lu et al., 2010, Proverbio et al., 2011) have been reported as early as 250 ms after stimulus onset. Thus, ERP components occurring from 250 ms after stimulus onset were all possible candidates for distinguishing between object-related and non-object-related action processing.
In other words, differences during action observation were expected on the P3a, P3b, and/or N400 components.
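In practice, such component-based predictions are usually tested by averaging the ERP over a fixed time window per component. The sketch below illustrates this standard mean-amplitude approach; the window boundaries are hypothetical placeholders based on the latency ranges discussed above, not the windows used in this study.

```python
import numpy as np

# Hypothetical component windows in ms (illustrative, not the study's).
WINDOWS = {"P100": (80, 130), "N170": (140, 200),
           "P3": (250, 400), "N400": (350, 500)}

def mean_amplitudes(erp, times, windows=WINDOWS):
    """Mean amplitude of an averaged ERP trace in each component window.

    erp   : (n_times,) averaged voltage trace for one condition/electrode
    times : (n_times,) time stamps in ms relative to stimulus onset
    """
    return {name: erp[(times >= lo) & (times < hi)].mean()
            for name, (lo, hi) in windows.items()}
```

Condition differences (e.g., object-related vs. non-object-related) would then be assessed on these per-window means, window by window.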
Finally, we aimed at testing the incidental character of the selective activation of object motor features during object-related action processing. To this aim, participants were given two distinct tasks. They were instructed either to recognize one given action (specific action recognition task), or to detect the presence of a red point during PLD visual presentation (red point detection task). If object motor feature activation were incidental, ERP differences between object-related and non-object-related actions would be independent of task requirements.
Section snippets
Participants
Twenty adults (mean age 24.3; age range 19–32; 13 women) participated in the experiment. Data from one participant were removed from the final statistical analysis due to head motion artifacts. The final sample included 19 participants. All participants were right-handed (handedness quotients 60–100%; mean 90%; Oldfield, 1971) and had normal or corrected-to-normal visual acuity. None of the participants reported a history of dyslexia or any neurological disease. The experimental procedure was
Results
Main results are presented in Fig. 3. The analyses reported below were run on individual-subject ERPs (between-subject variability). Note that the same pattern of results emerged in the analysis of individual-item ERPs (between-item variability; all significant P-values <0.05).
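The parallel by-subject and by-item analyses mentioned above both start from the same single-trial data; only the averaging unit changes. The helper below is a minimal, hypothetical sketch of that step (function and variable names are ours, not the authors'): trials are averaged within each (unit, condition) cell, where the unit is either the subject ID or the item ID.

```python
import numpy as np

def average_erps(data, condition, unit):
    """Average single-trial ERPs within each (unit, condition) cell.

    data      : (n_trials, n_times) single-trial voltage traces
    condition : (n_trials,) condition label per trial
    unit      : (n_trials,) subject ID (by-subject analysis) or
                item ID (by-item analysis)
    Returns a dict mapping (unit, condition) to the mean trace.
    """
    cells = {}
    for u in np.unique(unit):
        for c in np.unique(condition):
            mask = (unit == u) & (condition == c)
            if mask.any():
                cells[(u, c)] = data[mask].mean(axis=0)
    return cells
```

Running the same statistical contrast once on subject-level means and once on item-level means is what allows effects to be checked against both between-subject and between-item variability.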
Discussion
The main goal of the present work was to investigate whether transitive (object-related) and intransitive (non-object-related) gestures relied on distinct cognitive and neural mechanisms. Based on the assumption that regardless of gesture type, action execution and observation neural circuits largely overlap in several regions of the fronto-parietal cortex (for review, see Caspers, Zilles, Laird, & Eickhoff, 2010; Rizzolatti & Craighero, 2004 but see Hickok and Hauser, 2010, Kalénine et al.,
Acknowledgment
This work was funded by the French National Research Agency (ANR-11-PDOC-0014, ANR-11-EQPX-0023) and also supported by European funds through the FEDER SCV-IrDIVE program. The authors thank the 37 volunteers for their participation in the study and Yann Coello for valuable comments on the experiment design.
References (76)
- Bach, M., & Ullrich, D. (1997). Contrast dependency of motion-onset and pattern-reversal VEPs: Interaction of stimulus type, recording site and response component. Vision Research.
- Balconi, M., & Caldiroli, C. (2011). Semantic violation effect on object-related action comprehension: N400-like event-related potentials for unusual and incorrect use. Neuroscience.
- Buxbaum, L. J., et al. (2007). Left inferior parietal representations for skilled hand-object interactions: Evidence from stroke and corticobasal degeneration. Cortex.
- Buxbaum, L. J., & Saffran, E. M. (2002). Knowledge of object manipulation and object function: Dissociations in apraxic and nonapraxic subjects. Brain and Language.
- Carmo, J. C., & Rumiati, R. I. (2009). Imitation of transitive and intransitive actions in healthy individuals. Brain and Cognition.
- Caspers, S., Zilles, K., Laird, A. R., & Eickhoff, S. B. (2010). ALE meta-analysis of action observation and imitation in the human brain. NeuroImage.
- Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. NeuroImage.
- Cubelli, R., et al. (2000). Cognition in action: Testing a model of limb apraxia. Brain and Cognition.
- Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods.
- Delorme, A., Sejnowski, T., & Makeig, S. (2007). Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. NeuroImage.