
Neuropsychologia

Volume 57, May 2014, Pages 196-204

I give you a cup, I get a cup: A kinematic study on social intention

https://doi.org/10.1016/j.neuropsychologia.2014.03.006

Highlights

  • We investigate how affordances are modulated by physical and social contexts.

  • To manipulate the physical context we use objects linked by spatial or functional relations.

  • To study the social context we vary the other's kind of grip and her eye-gaze direction.

  • The Agent's giving response is modulated by both the physical and the social context.

  • In the getting condition, the other's hand posture and eye-gaze affect the Agent's response.

Abstract

While affordances have been studied intensively, the mechanisms by which their activation is modulated by context are poorly understood. We investigated how the Agent's reach-to-grasp movement towards a target object (e.g. a can) is influenced by the Other's interaction with a second object (manipulative/functional) and by his/her eye-gaze communication. To manipulate the physical context we showed participants two objects that could be linked by a spatial relation (e.g. can-knife, typically found in the same context) or by different functional relations. The functional relations could imply an action to perform with another person (functional–cooperative: e.g. can-glass) or on one's own (functional–individual: e.g. can-straw). When the objects were not related (e.g. can-toothbrush), participants had to refrain from responding. To respond, participants had to move the target object towards the other person in the giving condition, and towards their own body in the getting condition.

When participants (Agents) performed a reach-to-grasp movement to give the target object, in the presence of eye-gaze communication they reached the wrist's acceleration peak faster if the Other had previously interacted with the second object in accordance with its conventional use. Consistently, participants reached the maximum finger aperture (MFA) faster when the objects were related by a functional–individual rather than a functional–cooperative relation. The Agent's getting response strongly affected the grasping component of the movement: with eye-gaze sharing, the MFA was greater when the Other had previously performed a manipulative rather than a functional grip. The results reveal that humans have developed a sophisticated capability for detecting information from hand posture and eye gaze, which are informative as to the Agent's intention.

Introduction

Offering a cup of tea, or pouring some juice for somebody who is holding a glass, are apparently very simple actions. However, to perform actions as simple as these we need a host of sophisticated perceptual, motor and social abilities, which are at the core of our human endowment. These abilities include the capacity to be sensitive to the messages objects send us, i.e. to perceive their affordances. The ability to predict others' actions and to plan our own follows from what we see, and from the ability to attune ourselves to others, taking into account the actions they are executing. For example, when we see a mug in front of us and another person holding a teapot, we may be able to infer – from the combination of the two objects and from the observation of the other's action – whether he/she intends to pour some tea into our cup. If so, we can decide to facilitate his/her action, for example by holding our cup steady or by moving closer to him/her. The present study investigates how the physical context (i.e. different configurations and relations between pairs of objects) and the social context (i.e. the intentions we infer from observing others' actions) modulate the kinematics of our movement when we perform goal-directed actions with objects. To investigate the interplay between the information we extract from observing objects and from observing others' actions, we briefly review two lines of research relevant to our work, research on affordances and research on joint action, with a special focus on signalling.

In recent years the study of affordances has gained increasing interest in cognitive neuroscience. Starting from the general idea elaborated by Gibson (1979), according to which there are forms of direct perception of action possibilities, scholars have moved on to investigate specific components of the actions evoked by objects, for example the reaching and grasping components evoked by objects differing in size and orientation (Ellis and Tucker, 2000, Tucker and Ellis, 1998, Tucker and Ellis, 2001). Empirical studies on affordances have mainly focused on 2D images of single objects (Tucker & Ellis, 1998); in some studies participants were shown real objects (e.g. Tucker & Ellis, 2001) but were not allowed to interact with them directly; in either case, objects were not embedded within a context. Recently, authors have also focused on the activation of motor information by 3D images of objects located in physical-interactive contexts (Costantini, Ambrosini, Tieri, Sinigaglia, & Committeri, 2010; for kinematic studies on real objects see Mon-Williams and Bingham, 2011, Sartori et al., 2011a), showing that this motor activation is differentially enhanced by different action verbs (Costantini, Ambrosini, Scorolli, & Borghi, 2011). At the same time, scholars have investigated the motor information activated by 2D images of objects embedded in a physical and social context. The physical context could be given by a complex scene (e.g. Kalenine et al., in press, Mizelle and Wheaton, 2010, Mizelle and Wheaton, 2011, Mizelle et al., 2013) or by the presence of a further object, typically used together with the first one or typically found in the same situation (Yoon et al., 2010, Borghi et al., 2012, Natraj et al., 2013).
In some of these studies objects were also embedded in a kind of social context, given by the image of a hand in different postures in potential interaction with one of the two objects (Yoon et al., 2010, Borghi et al., 2012, Natraj et al., 2013). An fMRI study by Iacoboni et al. (2005) is particularly relevant here: the authors presented three kinds of stimuli: grasping hand actions without a context, context only (scenes containing objects), and grasping hand actions on a cup performed in two different contexts. In the latter condition the hand posture (either manipulative or functional) and the context suggested the final aim of the grasping action (drinking or cleaning). Actions presented within a context activated the premotor mirror-neuron areas, revealing that these areas are engaged during comprehension of others' intentions.

This evidence demonstrates that the activation of affordances is modulated not just by the physical context (the scene in which objects are embedded and the relations between object pairs) but also by the social one: context, hand posture and kinematic information are used by the observer to recognise the motor intention of another agent, and all these cues can be exploited to anticipate others' behaviour during social interaction (for a recent review of the neuroscientific literature on intentional actions see Bonini, Ferrari, & Fogassi, 2013). Further studies reveal that eye gaze is an important indicator of others' intentions (Castiello, 2003, Becchio et al., 2008, Innocenti et al., 2012), as both hand posture and eye gaze are modulated by our current goal (e.g. Tomasello, Carpenter, Call, Behne, & Moll, 2005; for neuroimaging evidence see also Pierno et al., 2006).

Even if these studies have the merit of situating object affordances within a context, the physical context is clearly oversimplified, owing either to the 2D presentation and the static character of the images, or to the absence of a scene in which the objects are embedded. This simplification is even more marked for the social context, where the social dimension is merely suggested through the image of a hand in different postures (often limited to the precision and the power grip) in potential interaction with objects (e.g. Vogt et al., 2003, Borghi et al., 2012, Yoon et al., 2010, Vainio et al., 2008, Setti et al., 2009, Iacoboni et al., 2005).

In specific social contexts, the automatic resonance mechanism triggered by observing others' actions (e.g. Fadiga, Fogassi, Pavesi, & Rizzolatti, 1995) can be disadvantageous. Seeing another person grasp a can to pour orange juice into my glass actually calls for a non-identical, complementary action (see Ocampo et al., 2011, Sartori et al., 2011b, Sartori et al., 2012a, Sartori et al., 2012b). Recent evidence has revealed that the mirror-neuron system is activated not only during motor resonance, when we covertly imitate others, but also when we perform complementary actions with others (Newman-Norlund, Noordzij, Meulenbroek, & Bekkering, 2007). Consistently, a basic representational system that codes for both imitative and complementary actions underlies joint action (Knoblich & Sebanz, 2008). During joint activity the partner's perspective is implicitly calculated and represented in concert with one's own, outside of conscious awareness (for reviews, see Sebanz et al., 2006, Knoblich et al., 2011; for a recent study of early developments in joint action see also Brownell, 2011; for modelling work see Pezzulo & Dindo, 2011). Such work demonstrates that people tend to coordinate in a variety of ways, for example following the same mathematical principles in limb movements (e.g. Schmidt, Carello, & Turvey, 1990) or swaying their bodies in similar ways during conversation (Shockley, Santana, & Fowler, 2003). Coordination can be emergent or planned. Research in cognitive psychology, for example on the Simon task, has provided compelling demonstrations of how people predict the actions of others while performing coordinated tasks. Evidence has shown that people tend to form a shared representation encompassing both their own task and that of their co-actor (e.g. Sebanz et al., 2006).

In this framework, the literature on signalling is particularly relevant to our work. When we have to perform a joint action with somebody, we need to signal our action intention and to tune ourselves to the needs of the other. Paradigmatic examples are studies on infant-directed speech and action. Evidence on motherese (Kuhl et al., 1997) and on motionese (e.g. Brand, Baldwin, & Ashburn, 2002) shows that mothers tune themselves to children's needs during learning, for example by stressing vowels while speaking so that children understand them better, or by performing very simple, repetitive movements in close proximity to capture the child's attention. Some recent studies have investigated how agents in a dyadic interaction tune themselves to perform a joint action with an object, such as trying to grasp a bottle as synchronously as possible (Sacheli et al., 2012, Sacheli et al., 2013). In a kinematic study, Sacheli et al. (2013) manipulated the role participants played (leader vs. follower): when they assumed the leader role they were instructed to manipulate the bottle without further specification, whereas when they played the follower role they were told to coordinate with the other by performing either imitative or complementary actions. Results showed that leaders tended to make their movements more communicative: they emphasized their movements and reduced their variability, allowing the other to predict their actions more easily.

Studies on signalling have the merit of investigating online adjustments during a joint action. However, they mostly focus on communication and signalling between partners who interact with a single object, manipulating for example their interpersonal relations (Sacheli et al., 2012). As recognized by scholars studying coordination (Knoblich et al., 2011), the role of affordances in emergent coordination with multiple objects has not been investigated.

In the present experiment we combine insights from these two research areas: the study of affordances and the study of joint action and signalling. With respect to previous evidence on affordances, the present study introduces two novelties. First, we tested participants in an ecological and dynamic setting, using a paradigm that addressed the role of both the physical and the social context: participants interacted with a real object while observing the experimenter interacting with another object. Second, we manipulated the agent's goals by varying the kind of response. With respect to previous work on signalling and joint action, the present study introduces two further novelties. First, we verified how objects suggest individual vs. cooperative actions by manipulating the kind of relation linking pairs of objects. Further, we focused on two kinds of signals, eye gaze and hand posture. While the influence of eye gaze (e.g. Innocenti et al., 2012) and of cooperative or competitive contexts (Georgiou, Becchio, Glover, & Castiello, 2007) on the kinematics of the reach-to-grasp movement has been extensively studied (for the specific effects of face/arm observation on participants' judgements of social vs. individual actions see also Sartori, Straulino and Castiello, 2011c), highlighting their role in social interaction, to our knowledge nobody has so far investigated the role hand posture can play in signalling whether an agent intends to perform an individual or a cooperative action with an object. In addition, we verified how information derived from eye gaze and posture interacts with information derived from the objects' reciprocal relations. Specifically, in a kinematic study we investigated how an agent's reach-to-grasp movement towards a target object (e.g. a mug) is influenced by the interaction with another, known person (the experimenter).
The experimenter moved or used an object (manipulative/functional grip) while looking either at the participant or at her own hand (i.e. eye-gaze communication present or absent). We selected a manipulative and a functional grip on the basis of previous work (see for example Borghi et al., 2012, Iacoboni et al., 2005): the functional grip is aimed at grasping the object in order to use it, while the manipulative grip is similar in finger configuration (both are power grips) but the object is held from its upper part, as when we move it. The participant had to grasp a second object (the target object) and either give it to the experimenter or move it towards her own body (giving/getting response). The two objects could be linked by a spatial (e.g. mug-kitchen paper), a functional–individual (e.g. mug-teabag) or a functional–cooperative relation (e.g. mug-teapot). When the objects were unrelated (e.g. mug-hairbrush), the participant had to refrain from responding.
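The two kinematic measures used throughout this study, the latency of the wrist's acceleration peak and the maximum finger aperture (MFA), can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the authors' analysis pipeline: it supposes wrist, thumb and index marker positions sampled at a fixed interval, and the function names are hypothetical.

```python
import math

def derivative(samples, dt):
    """Central finite difference of a 1-D sequence sampled at interval dt.

    Returns len(samples) - 2 values; two boundary samples are lost.
    """
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

def speed(positions, dt):
    """Tangential speed from 3-D positions (a list of (x, y, z) tuples)."""
    vx = derivative([p[0] for p in positions], dt)
    vy = derivative([p[1] for p in positions], dt)
    vz = derivative([p[2] for p in positions], dt)
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(vx, vy, vz)]

def acceleration_peak_latency(wrist_positions, dt):
    """Time (s) from movement onset at which wrist acceleration is maximal."""
    acc = derivative(speed(wrist_positions, dt), dt)
    i = max(range(len(acc)), key=lambda k: acc[k])
    # +2: two samples are lost to the two central-difference passes
    return (i + 2) * dt

def max_finger_aperture(thumb_positions, index_positions):
    """Maximum 3-D thumb-index distance (MFA) over the movement."""
    return max(math.dist(t, x) for t, x in zip(thumb_positions, index_positions))
```

A shorter latency of the acceleration peak indicates an earlier energization of the transport component, while a larger MFA reflects a wider hand opening during the grasp; these are the dependent variables compared across conditions in the results below.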

The goals of this research are threefold. The first is to determine whether the eye gaze and the hand posture of another person interacting with an object are informative cues as to whether she will perform a cooperative action or an action on her own. The second is to verify whether different kinds of relation between objects evoke different kinds of action, individual vs. cooperative. The third is to determine how participants' different goals (getting vs. giving an object), together with social cues and the relations between objects, modulate motor responses.

Section snippets

Participants

Twelve students took part in the experiment (mean age 23.81, SD=3.70; 6 women). All were right-handed according to a reduced, revised version of the Edinburgh Handedness Inventory (Oldfield, 1971, Williams, 1991) (“Which hand do you use for writing/throwing/toothbrush/knife (without fork)/computer mouse?”), were native Italian speakers with normal or corrected-to-normal vision, and were naive as to the purpose of the experiment. The study was carried out along the principles of the Helsinki Declaration

Giving condition

Analyses showed no significant main effects (Eye-gaze Communication: p=.39; Relation between the Objects: p=.95), although Experimenter's Grip showed a marginally significant effect, p=.06: latencies were slightly shorter after a functional grip (M=386.05 ms) than after a manipulative grip (M=413.43 ms).

The interaction between Eye-gaze Communication and Experimenter's Grip was significant, F(1,10)=7.25, MSe=3123.17, p<.05. When eye-gaze communication was absent, latencies did not differ after

Discussion

Results from the present study reveal that we are sensitive not only to the physical context (i.e. the relations between objects) but also to the social one. During the giving response, that is, when participants had to grasp the close object (target object: a can or a cup) and move it into the peripersonal space of the experimenter, interaction with the object in accordance with its conventional use (functional grip) anticipated the MFA. In the presence of a social request conveyed by the eye-gaze

Acknowledgments

This work was supported by the European Community, project ROSSI: Emergence of communication in RObots through Sensorimotor and Social Interaction (Grant agreement no. 216125). Thanks to Giovanni Pezzulo for suggestions on work on signaling.

References

  • G. Knoblich et al.

    Psychological research on joint action: Theory and data

  • J.C. Mizelle et al.

    Neural activation for conceptual identification of correct versus incorrect tool-object pairs

    Brain Research

    (2010)
  • J.C. Mizelle et al.

    Ventral encoding of functional affordances: A neural pathway for identifying errors in action

    Brain and Cognition

    (2013)
  • N. Natraj et al.

    Context and hand posture modulate the neural dynamics of tool-object perception

    Neuropsychologia

    (2013)
  • R.C. Oldfield

    The assessment and analysis of handedness: The Edinburgh inventory

    Neuropsychologia

    (1971)
  • L. Sartori et al.

    Cues to intention: The role of movement information

    Cognition

    (2011)
  • N. Sebanz et al.

    Joint action: Bodies and minds moving together

    Trends in Cognitive Sciences

    (2006)
  • A. Senju

    Atypical development of spontaneous social cognition in autism spectrum disorders

    Brain and Development

    (2013)
  • A. Senju et al.

    The eye contact effect: Mechanisms and development

    Trends in Cognitive Sciences

    (2009)
  • A. Setti et al.

    Moving hands, moving entities

    Brain and Cognition

    (2009)
  • M. Tomasello et al.

    Reliance on head versus eyes in the gaze following of great apes and human infants: The cooperative eye hypothesis

    Journal of Human Evolution

    (2007)
  • L. Vainio et al.

    On the relations between action planning, object identification, and motor representations of observed actions and objects

    Cognition

    (2008)
  • S. Vogt et al.

    Visuomotor priming by pictures of hand postures: Perspective matters

    Neuropsychologia

    (2003)
  • C. Begliomini et al.

    Differential cortical activity for precision and whole-hand visually guided grasping in humans

    European Journal of Neuroscience

    (2007)
  • K.M. Bennett et al.

    Upper limb movement differentiation according to taxonomic semantic category

    Neuroreport

    (1998)
  • R.J. Brand et al.

    Evidence for ‘motionese’: Modifications in mothers’ infant-directed action

    Developmental Science

    (2002)
  • C.A. Brownell

    Early developments in joint action

    Review of Philosophy and Psychology

    (2011)
  • U. Castiello

    Understanding other people’s actions: Intention and attention

    Journal of Experimental Psychology: Human Perception and Performance

    (2003)
  • U. Castiello

    The neuroscience of grasping

    Nature Reviews Neuroscience

    (2005)
  • U. Castiello et al.

    A brain-damaged patient with an unusual perceptuomotor deficit

    Nature

    (1995)
  • M. Costantini et al.

    Where does an object trigger an action? An investigation about affordances in space

    Experimental Brain Research

    (2010)
  • M. Costantini et al.

    When objects are close to me: Affordances in the peripersonal space

    Psychonomic Bulletin & Review

    (2011)
  • S.H. Creem-Regehr et al.

    Relating spatial perspective taking to the perception of other’s affordances: Providing a foundation for predicting the future behavior of others

    Frontiers in Human Neuroscience

    (2013)
  • E. De Stefani et al.

    Concatenation of observed grasp phases with observer’s distal movements: A behavioural and TMS study

    PLoS One

    (2013)
  • R. Ellis et al.

    Bodies and other visual objects: The dialectics of reaching toward objects

    Psychological Research

    (2013)
  • R. Ellis et al.

    Micro-affordance: The potentiation of components of action by seen objects

    British Journal of Psychology

    (2000)