Abstract
Several models of action recognition posit distinct grip and goal representations in the processing of others’ actions, yet their functional role and temporal organization are still debated. The present priming study evaluated the relative timing of grip and goal activation during the processing of photographs of object-directed actions. Actions could be correct or incorrect owing to grip and/or goal violations. Twenty-eight (Experiment 1) and 25 (Experiment 2) healthy adults judged the correctness of target actions according to the typical use of the object. Target pictures were primed by action pictures sharing the same grip, the same goal, both, or neither. Primes were presented for 66 or 300 ms in Experiment 1 and for 120 or 220 ms in Experiment 2. In Experiment 1, facilitative priming effects were observed for both goal and grip similarity after 300 ms primes, but only for goal similarity after 66 ms primes. In Experiment 2, facilitative priming effects were found for both goal and grip similarity from 120 ms of prime processing. In addition, a control condition in Experiment 2 indicated that mere object priming could partially account for the goal similarity priming effects, suggesting that object identity may help the observer predict possible action goals. The findings demonstrate that goal representations are activated early, and before grip representations, during action decoding, consistent with predictive accounts of action understanding. Future studies should determine to what extent the timing of grip and goal activation is context-sensitive.
Notes
A sample size of about 30 participants was chosen to ensure sufficient statistical power (0.80) for anticipated moderate effect sizes (Cohen’s d = 0.50 for the critical paired comparisons).
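The sample-size target implied by these parameters can be checked with a standard power computation for a paired (one-sample) t test. The sketch below is illustrative only, assuming a two-tailed test at α = .05; the smallest n reaching 80% power for d = 0.50 comes out at roughly 34, close to the “about 30” reported.

```python
import math

from scipy.stats import nct, t as tdist


def paired_t_power(n, d, alpha=0.05):
    """Power of a two-tailed paired t test for effect size d with n pairs.

    Uses the noncentral t distribution; the (negligible) lower-tail
    rejection region is ignored.
    """
    df = n - 1
    crit = tdist.ppf(1 - alpha / 2, df)          # critical t value
    ncp = d * math.sqrt(n)                       # noncentrality parameter
    return nct.sf(crit, df, ncp)


# Smallest n with power >= .80 for d = .50 (about 34 participants).
n = 2
while paired_t_power(n, 0.5) < 0.80:
    n += 1
```

This is the same computation implemented by tools such as G*Power for a one-sample/paired design.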
“On each trial, you will see two successive pictures showing an actor using an object. The first picture will always be briefly presented. You will have to judge the second photograph: determine, as fast and as accurately as possible, whether the presented action is correct or not according to the typical use of the object. The use of an object is atypical when the object is used for another purpose, or in another manner than the typical one. You will start with a training session during which you will receive feedback.”
References
Amoruso L, Urgesi C (2016) Familiarity modulates motor activation while other species’ actions are observed: a magnetic stimulation study. Eur J Neurosci 43:765–772. https://doi.org/10.1111/ejn.13154
Ansuini C, Santello M, Massaccesi S, Castiello U (2005) Effects of end-goal on hand shaping. J Neurophysiol 95:2456–2465. https://doi.org/10.1152/jn.01107.2005
Ansuini C, Cavallo A, Bertone C, Becchio C (2014) The visible face of intention: why kinematics matters. Front Psychol 5:1–6. https://doi.org/10.3389/fpsyg.2014.00815
Avanzini P, Fabbri-Destro M, Campi C et al (2013) Spatiotemporal dynamics in understanding hand–object interactions. Proc Natl Acad Sci USA 110:15878–15885. https://doi.org/10.1073/pnas.1314420110
Bach P, Nicholson T, Hudson M (2014) The affordance-matching hypothesis: how objects guide action understanding and prediction. Front Hum Neurosci 8:254. https://doi.org/10.3389/fnhum.2014.00254
Barr DJ, Levy R, Scheepers C, Tily HJ (2013) Random effects structure for confirmatory hypothesis testing: keep it maximal. J Mem Lang 68:255–278. https://doi.org/10.1016/j.jml.2012.11.001
Barsalou LW (2008) Grounded cognition. Annu Rev Psychol 59:617–645. https://doi.org/10.1146/annurev.psych.59.103006.093639
Barton K (2016) MuMIn: multi-model inference. R package version 1.40.4. https://CRAN.R-project.org/package=MuMIn
Bates D, Kliegl R, Vasishth S, Baayen H (2015a) Parsimonious mixed models, pp 1–27. arXiv Prepr arXiv:1506.04967
Bates D, Mächler M, Bolker B, Walker S (2015b) Fitting linear mixed-effects models using lme4. J Stat Softw. https://doi.org/10.18637/jss.v067.i01
Catmur C (2015) Understanding intentions from actions: direct perception, inference, and the roles of mirror and mentalizing systems. Conscious Cogn. https://doi.org/10.1016/j.concog.2015.03.012
Cattaneo L, Sandrini M, Schwarzbach J (2010) State-dependent TMS reveals a hierarchical representation of observed acts in the temporal, parietal, and premotor cortices. Cereb Cortex 20:2252–2258. https://doi.org/10.1093/cercor/bhp291
Cavallo A, Heyes C, Becchio C et al (2014) Timecourse of mirror and counter-mirror effects measured with transcranial magnetic stimulation. Soc Cogn Affect Neurosci 9:1082–1088. https://doi.org/10.1093/scan/nst085
Cavallo A, Koul A, Ansuini C et al (2016) Decoding intentions from movement kinematics. Sci Rep 6:37036. https://doi.org/10.1038/srep37036
Cooper RP, Ruh N, Mareschal D (2014) The goal circuit model: a hierarchical multi-route model of the acquisition and control of routine sequential action in humans. Cogn Sci 38:244–274. https://doi.org/10.1111/cogs.12067
Geangu E, Senna I, Croci E, Turati C (2015) The effect of biomechanical properties of motion on infants’ perception of goal-directed grasping actions. J Exp Child Psychol 129:55–67. https://doi.org/10.1016/j.jecp.2014.08.005
Gentsch A, Weber A, Synofzik M et al (2016) Towards a common framework of grounded action cognition: relating motor control, perception and cognition. Cognition 146:81–89
Giglio ACA, Minati L, Boggio PS (2013) Throwing the banana away and keeping the peel: neuroelectric responses to unexpected but physically feasible action endings. Brain Res 1532:56–62. https://doi.org/10.1016/j.brainres.2013.08.017
Grafton ST, Hamilton AFDC (2007) Evidence for a distributed hierarchy of action representation in the brain. Hum Mov Sci 26:590–616. https://doi.org/10.1016/j.humov.2007.05.009
Hrkać M, Wurm MF, Schubotz RI (2014) Action observers implicitly expect actors to act goal-coherently, even if they do not: an fMRI study. Hum Brain Mapp 35:2178–2190. https://doi.org/10.1002/hbm.22319
Hudson M, Nicholson T, Ellis R, Bach P (2016a) I see what you say: prior knowledge of other’s goals automatically biases the perception of their actions. Cognition 146:245–250. https://doi.org/10.1016/j.cognition.2015.09.021
Hudson M, Nicholson T, Simpson WA et al (2016b) One step ahead: the perceived kinematics of others’ actions are biased toward expected goals. J Exp Psychol Gen 145:1–7. https://doi.org/10.1037/xge0000126
Iacoboni M, Molnar-Szakacs I, Gallese V et al (2005) Grasping the intentions of others with one’s own mirror neuron system. PLoS Biol 3:0529–0535. https://doi.org/10.1371/journal.pbio.0030079
Jacob P, Jeannerod M (2005) The motor theory of social cognition: a critique. Trends Cogn Sci 9:21–25. https://doi.org/10.1016/j.tics.2004.11.003
Jacquet PO, Avenanti A (2015) Perturbing the action observation network during perception and categorization of actions’ goals and grips: state-dependency and virtual lesion TMS effects. Cereb Cortex 25:598–608. https://doi.org/10.1093/cercor/bht242
Kalénine S, Shapiro AD, Buxbaum LJ (2013) Dissociations of action means and outcome processing in left-hemisphere stroke. Neuropsychologia 51:1224–1233. https://doi.org/10.1016/j.neuropsychologia.2013.03.017
Kilner JM (2011) More than one pathway to action understanding. Trends Cogn Sci 15:352–357. https://doi.org/10.1016/j.tics.2011.06.005
Kilner JM, Friston KJ, Frith CD (2007) Predictive coding: an account of the mirror neuron system. Cogn Process 8:159–166. https://doi.org/10.1007/s10339-007-0170-2
Kristjansson A (2008) “I know what you did on the last trial”—a selective review of research on priming in visual search. Front Biosci 13:1171. https://doi.org/10.2741/2753
Lepage JF, Tremblay S, Théoret H (2010) Early non-specific modulation of corticospinal excitability during action observation. Eur J Neurosci 31:931–937. https://doi.org/10.1111/j.1460-9568.2010.07121.x
Lewkowicz D, Quesque F, Coello Y, Delevoye-Turrell YN (2015) Individual differences in reading social intentions from motor deviants. Front Psychol 6:1–12. https://doi.org/10.3389/fpsyg.2015.01175
Longo MR, Kosobud A, Bertenthal BI (2008) Automatic imitation of biomechanically possible and impossible actions: effects of priming movements versus goals. J Exp Psychol Hum Percept Perform 34:489–501. https://doi.org/10.1037/0096-1523.34.2.489
Manera V, Becchio C, Schouten B et al (2011) Communicative interactions improve visual detection of biological motion. PLoS One. https://doi.org/10.1371/journal.pone.0014594
Matuschek H, Kliegl R, Vasishth S et al (2017) Balancing type I error and power in linear mixed models. J Mem Lang 94:305–315. https://doi.org/10.1016/j.jml.2017.01.001
Naish KR, Reader AT, Houston-Price C et al (2013) To eat or not to eat? Kinematics and muscle activity of reach-to-grasp movements are influenced by the action goal, but observers do not detect these differences. Exp Brain Res 225:261–275. https://doi.org/10.1007/s00221-012-3367-2
Naish KR, Houston-Price C, Bremner AJ, Holmes NP (2014) Effects of action observation on corticospinal excitability: muscle specificity, direction, and timing of the mirror response. Neuropsychologia 64:331–348. https://doi.org/10.1016/j.neuropsychologia.2014.09.034
Neal A, Kilner JM (2010) What is simulated in the action observation network when we observe actions? Eur J Neurosci 32:1765–1770. https://doi.org/10.1111/j.1460-9568.2010.07435.x
Nicholson T, Roser M, Bach P (2017) Understanding the goals of everyday instrumental actions is primarily linked to object, not motor-kinematic, information: evidence from fMRI. PLoS One 12:1–21. https://doi.org/10.1371/journal.pone.0169700
Novack MA, Wakefield EM, Goldin-Meadow S (2016) What makes a movement a gesture? Cognition 146:339–348. https://doi.org/10.1016/j.cognition.2015.10.014
Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113. https://doi.org/10.1016/0028-3932(71)90067-4
Ortigue S, Thompson JC, Parasuraman R, Grafton ST (2009) Spatio-temporal dynamics of human intention understanding in temporo-parietal cortex: a combined EEG/fMRI repetition suppression paradigm. PLoS One. https://doi.org/10.1371/journal.pone.0006962
Quesque F, Coello Y (2015) Perceiving what you intend to do from what you do: evidence for embodiment in social interactions. Socioaffect Neurosci Psychol 5:28602. https://doi.org/10.3402/snp.v5.28602
Quesque F, Lewkowicz D, Delevoye-Turrell YN, Coello Y (2013) Effects of social intention on movement kinematics in cooperative actions. Front Neurorobot 7:14. https://doi.org/10.3389/fnbot.2013.00014
R Core Team (2017) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
Rizzolatti G, Fogassi L (2014) The mirror mechanism: recent findings and perspectives. Philos Trans R Soc Lond B Biol Sci 369:20130420. https://doi.org/10.1098/rstb.2013.0420
Schenke KC, Wyer NA, Bach P (2016) The things you do: internal models of others’ expected behaviour guide action observation. PLoS One 11:e0158910. https://doi.org/10.1371/journal.pone.0158910
Thill S, Caligiore D, Borghi AM et al (2013) Theories and computational models of affordance and mirror systems: an integrative review. Neurosci Biobehav Rev 37:491–521. https://doi.org/10.1016/j.neubiorev.2013.01.012
Thioux M, Keysers C (2015) Object visibility alters the relative contribution of ventral visual stream and mirror neuron system to goal anticipation during action observation. Neuroimage 105:380–394. https://doi.org/10.1016/j.neuroimage.2014.10.035
Tidoni E, Borgomaneri S, di Pellegrino G, Avenanti A (2013) Action simulation plays a critical role in deceptive action recognition. J Neurosci 33:611–623. https://doi.org/10.1523/JNEUROSCI.2228-11.2013
van Elk M, Van Schie HT, Bekkering H (2008) Conceptual knowledge for understanding other’s actions is organized primarily around action goals. Exp Brain Res 189:99–107. https://doi.org/10.1007/s00221-008-1408-7
van Elk M, Bousardt R, Bekkering H, van Schie HT (2012) Using goal- and grip-related information for understanding the correctness of other’s actions: an ERP study. PLoS One 7:1–8. https://doi.org/10.1371/journal.pone.0036450
van Elk M, van Schie H, Bekkering H (2014) Action semantics: a unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge. Phys Life Rev 11:220–250. https://doi.org/10.1016/j.plrev.2013.11.005
van Schie HT, Bekkering H (2007) Neural mechanisms underlying immediate and final action goals in object use reflected by slow wave brain potentials. Brain Res 1148:183–197. https://doi.org/10.1016/j.brainres.2007.02.085
Wolpert D, Doya K, Kawato M (2003) A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci 358:593–602. https://doi.org/10.1098/rstb.2002.1238
Wurm MF, Lingnau A (2015) Decoding actions at different levels of abstraction. J Neurosci 35:7727–7735. https://doi.org/10.1523/JNEUROSCI.0188-15.2015
Wurm MF, Schubotz RI (2012) Squeezing lemons in the bathroom: contextual information modulates action recognition. Neuroimage 59:1551–1559. https://doi.org/10.1016/j.neuroimage.2011.08.038
Wurm MF, Schubotz RI (2016) What’s she doing in the kitchen? Context helps when actions are hard to recognize. Psychon Bull Rev. https://doi.org/10.3758/s13423-016-1108-4
Yoon EY, Humphreys GW, Riddoch MJ (2010) The paired-object affordance effect. J Exp Psychol Hum Percept Perform 36:812–824. https://doi.org/10.1037/a0017175
Zentgraf K, Munzert J, Bischoff M, Newman-Norlund RD (2011) Simulation during observation of human actions—theories, empirical studies, applications. Vision Res 51:827–835. https://doi.org/10.1016/j.visres.2011.01.007
Zhang L, Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20:2378–2386
Acknowledgements
This work was funded by the French National Research Agency (ANR-16-CE28-0003 and ANR-11-EQPX-0023) and benefited from a regional fellowship (Hauts-de-France) to J. Decroix.
Appendices
Appendix 1: List of objects used in Experiments 1 and 2
Carafe
Coffee cup
Fork
Hairbrush
Hairdryer
Hammer
Knife
Liquid soap
Magnifying glass
Pencil
Phone
Screwdriver
Teaspoon
Teapot
Toothbrush
Torch
Cream tube
Water bottle
Watering can
Wine glass
Appendix 2
Priming paradigms are known to be influenced by low-level visual features (e.g., Kristjansson 2008). Our results might therefore be driven by perceptual differences between conditions rather than by the activation of different levels of action representation. For example, earlier and stronger priming effects would be expected between pairs of action pictures that are more perceptually similar. We used the FSIM algorithm developed by Zhang et al. (2011) to assess image similarity based on low-level visual features; the closer the index is to 1, the more perceptually similar the pictures are. An index of perceptual similarity was first computed between each type of prime (correct grip but incorrect goal, incorrect grip but correct goal, incorrect grip and incorrect goal, neutral action-free) and target (correct action target, incorrect action target), and then attributed to each type of prime–target pair (grip similar only, goal similar only, all different, neutral). No indices were computed for grip and goal similar pairs, as prime and target were the exact same picture.
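The intuition behind such an index can be sketched as follows. This minimal example reproduces only the gradient-magnitude component of FSIM (the full index of Zhang et al. 2011 additionally weights by phase congruency and, for colour images, chrominance); the constant T is the stabilising term used in the paper for the gradient component.

```python
import numpy as np


def gradient_similarity(img1, img2, T=160.0):
    """Mean gradient-magnitude similarity between two grayscale images.

    A sketch of the gradient component of FSIM only; identical images
    yield exactly 1.0, and the index decreases with perceptual difference.
    """
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.sqrt(gx ** 2 + gy ** 2)

    g1, g2 = grad_mag(img1), grad_mag(img2)
    # Per-pixel similarity map in (0, 1]; T prevents instability
    # where both gradients are near zero.
    s = (2 * g1 * g2 + T) / (g1 ** 2 + g2 ** 2 + T)
    return float(s.mean())
```

Applied to a prime–target pair, the index is 1.0 when prime and target are the same picture, which is why no indices were needed for the grip and goal similar pairs.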
Perceptual similarity was modelled as a function of prime–target pair type (grip similar only, goal similar only, all different, neutral) and RESP (yes, no) as fixed effects, with random intercepts for items. Model comparison showed neither an interaction between RESP and pair type [χ2(3) = 2.5356, p = .4689] nor a main effect of RESP [χ2(1) = 1.7477, p = .1862]. We did observe, however, a main effect of pair type [χ2(3) = 213.11, p < .001]. Compared to ‘all different’ pairs, perceptual similarity was higher in ‘grip similar only’ pairs (estimate 0.055, SE = 0.004, t = 12.36, p < .001), but lower in neutral pairs (estimate − 0.044, SE = 0.004, t = − 9.96, p < .001). ‘All different’ and ‘goal similar only’ pairs did not differ from each other (estimate 0.007, SE = 0.004, t = 1.67, p = .09).
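The χ2 values above come from likelihood-ratio tests between nested models (the original analyses were run in R with lme4; the p-value step itself is the same in any language). As a sketch, the reported RESP × pair-type test can be reproduced from its χ2 statistic and degrees of freedom:

```python
from scipy.stats import chi2


def lrt_pvalue(deviance_reduced, deviance_full, df_diff):
    """p value for a likelihood-ratio test between nested models.

    The test statistic is the difference in deviance (-2 log likelihood),
    referred to a chi-squared distribution with df_diff degrees of freedom.
    """
    stat = deviance_reduced - deviance_full
    return chi2.sf(stat, df_diff)


# Reproducing the reported interaction test: chi^2(3) = 2.5356 -> p = .4689.
p_interaction = chi2.sf(2.5356, 3)
```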
To assess the relation between grip and goal priming effects and the perceptual similarity indices in the different goal and grip similarity conditions, Spearman’s rank correlations were computed. Table 2 summarizes the results of the analysis.
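A Spearman rank correlation of this kind can be sketched as below. The per-item values are hypothetical placeholders (the real analysis used the observed priming effects and FSIM indices per condition); the point is simply that the test relates the rank orderings of the two variables, not their raw values.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-item values: priming effect (ms) and perceptual
# similarity index. Deliberately constructed with identical rank orders,
# so the correlation is perfect.
priming_effect = np.array([12.0, 5.0, 30.0, 18.0, 9.0, 25.0])
similarity_index = np.array([0.71, 0.65, 0.90, 0.80, 0.68, 0.85])

rho, p = spearmanr(priming_effect, similarity_index)
```

A strong positive rho here would indicate that priming effects track low-level perceptual similarity, which is exactly the confound this control analysis was designed to rule out.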
Appendix 3A: Determination of the random effect structure in mixed-effect models of Experiment 1
Following the recommendation of Barr et al. (2013), we first attempted to fit a model with the maximal random structure, including, for both subjects and items, random intercepts, random slopes for GRIP, GOAL, DURATION, and RESP, and random slopes for the four possible interactions between GRIP, GOAL, DURATION, and RESP. As expected, the model was overparametrized with regard to the available data and failed to converge to a stable solution (see Bates et al. 2015a, b; Matuschek et al. 2017). We then determined the optimal random structure supported by the data, following the recommendations of Bates et al. (2015a, b). First, higher-order random slopes (those reflecting interactions) were removed. We then looked for further reduction of the remaining random effect structure by conducting a principal component analysis on the random terms of the model, using the rePCA function from the RePsychLing package version 0.0.4 developed by Bates et al. (2015a, b). The analysis identified five components, three of which were sufficient to explain 100% of the variance. For subjects, the random slope for DURATION contributed least to these three components and was thus removed from the random effect structure. For items, the least representative factors were GRIP and GOAL, and these two factors were removed from the random effect structure. Consequently, the final model included random intercepts and random slopes for GRIP, GOAL and RESP for subjects, and random intercepts and random slopes for DURATION and RESP for items.
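The rePCA step amounts to an eigen-decomposition of the estimated random-effect covariance matrix: components with (near-)zero variance signal that the data cannot support the full set of random terms. The sketch below illustrates this logic in Python with a made-up covariance matrix (the actual analysis used rePCA in R on the fitted lme4 model); five random terms whose covariance has rank three yield exactly three supported components.

```python
import numpy as np


def n_supported_components(cov, tol=1e-8):
    """Number of principal components of a random-effect covariance
    matrix with non-negligible variance (a rePCA-style diagnostic)."""
    eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
    total = eigvals.sum()
    # Components whose variance share is effectively zero are unsupported.
    return int((eigvals > tol * total).sum())


# Hypothetical loadings for 5 random-effect terms driven by only
# 3 underlying dimensions -> a rank-3 covariance matrix.
B = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.5, 0.1, 0.2],
              [0.2, 0.3, 0.4]])
cov = B @ B.T

n_components = n_supported_components(cov)
```

In such a case, the slopes loading least on the supported components are the natural candidates for removal, which is how the DURATION slope (subjects) and the GRIP and GOAL slopes (items) were dropped above.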
The following full model was finally considered in Experiment 1 (bold indicates fixed effects of interest)
Appendix 3B: Determination of random effect structure in mixed-effect models of Experiment 2
As in Experiment 1, the maximal random structure was not supported by the data of Experiment 2. Moreover, the model did not converge with GRIP, GOAL, DURATION, and RESP as random slopes for participants and items. A principal component analysis on the random terms of the model indicated that, for subjects, the random slopes for GRIP, GOAL and DURATION contributed least to the components explaining 100% of the variance and were removed. For items, the least representative factors were GRIP and GOAL, which were likewise removed. Thus, the final model included random intercepts and a random slope for RESP for subjects, and random intercepts and random slopes for RESP and DURATION for items.
The following full model was finally considered in Experiment 2 (Bold indicates fixed effects of interest)
Decroix, J., Kalénine, S. Timing of grip and goal activation during action perception: a priming study. Exp Brain Res 236, 2411–2426 (2018). https://doi.org/10.1007/s00221-018-5309-0