The online version of this article (https://doi.org/10.1007/s00426-019-01198-y) contains supplementary material, which is available to authorized users.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Humans are unique in their ability to communicate information through representational gestures, which visually simulate an action (e.g., moving the hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-communicative (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors’ faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance for more-communicative compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. These results provide insights into processes of mutual understanding, as well as into the design of artificial communicative agents.
Supplementary material 1 (R 6 kb): 426_2019_1198_MOESM1_ESM.R
Supplementary material 2 (R 7 kb): 426_2019_1198_MOESM2_ESM.R
Supplementary material 3 (TXT 192 kb): 426_2019_1198_MOESM3_ESM.txt
Supplementary material 4 (TXT 450 kb): 426_2019_1198_MOESM4_ESM.txt
Supplementary material 5 (DOCX 1830 kb): 426_2019_1198_MOESM5_ESM.docx
The communicative advantage: how kinematic signaling supports semantic comprehension
James P. Trujillo
Springer Berlin Heidelberg
An International Journal of Perception, Attention, Memory, and Action
Print ISSN: 0340-0727
Electronic ISSN: 1430-2772