
Shared Representations as Coordination Tools for Interaction


Abstract

Why is interaction so simple? This article presents a theory of interaction based on the use of shared representations as “coordination tools” (e.g., roundabouts that facilitate coordination of drivers). By aligning their representations (intentionally or unintentionally), interacting agents help one another to solve interaction problems in that they remain predictable, and offer cues for action selection and goal monitoring. We illustrate how this strategy works in a joint task (building together a tower of bricks) and discuss its requirements from a computational viewpoint.


Notes

  1. Influences can be direct (e.g., moving the other agent’s arm, putting a block in a position that prevents other blocks from being added), or indirect, aimed at changing another’s cognitive representations (e.g., asking them to stop, looking repeatedly at their actions to express disapproval or to request help, signaling which brick to take if they are uncertain).

  2. Here we are disregarding conventional aspects of this activity, such as traffic rules (e.g., turning clockwise or counterclockwise, giving priority to the right or to the left).

  3. See Appendix A and Pezzulo and Dindo (2011) for a more complete description using the formalism of Dynamic Bayesian Networks.

  4. Dialogue is itself a joint (communicative) action and has characteristics similar to those of the tower game. Clark (1996) discusses extensively how utterances convey communicative intentions, but also serve to negotiate a common ground, resolve conflicts and miscommunication, keep communication on track, and ultimately achieve (joint) communicative intentions, in addition to providing (common) ground to act jointly in the external world.

  5. Observers can, at the same time, infer information that helps them revise their model of the performing agent (e.g., how skilled it is).

  6. Forming shared representations could be useful in competitive scenarios, too. Indeed, channeling the adversary’s predictions makes it easier to trick and feint them when necessary.

  7. Note also that not all representations for action need to be “mental” objects; action plans can include external representations (Kirsh 2010) as well, such as maps and diagrams to which both agents can refer (e.g., linguistically or by pointing at them), or objects that are under the attention of both agents (such as the two-brick towers in Figs. 1, 2, and 3).

  8. More precisely, when the source of errors in prediction cannot be attributed to random events, but is more plausibly interpreted as depending on the other agent’s beliefs and intentions.

  9. This formulation is typical of POMDPs (Kaelbling et al. 1998). See Bishop (2006) for a reference on Bayesian generative systems, and Pezzulo and Dindo (2011) for a more complete treatment of the interactive strategy from a computational viewpoint.

  10. This does not exclude the possibility that task representations are richer; this is the case, for instance, when only one of the participating agents is able to perform a certain action.

References

  • Aarts, H., Gollwitzer, P., and Hassin, R. 2004. Goal contagion: Perceiving is for pursuing. Journal of Personality and Social Psychology 87:23–37.

  • Bacharach, M. 2006. Beyond individual choice: Teams and frames in game theory. Eds. N. Gold and R. Sugden. Princeton, NJ: Princeton University Press. http://press.princeton.edu/titles/8174.html.

  • Baker, C.L., Saxe, R., and Tenenbaum, J.B. 2009. Action understanding as inverse planning. Cognition 113(3):329–349.

  • Bishop, C.M. 2006. Pattern recognition and machine learning. Springer.

  • Blakemore, S.-J., and Frith, C. 2005. The role of motor contagion in the prediction of action. Neuropsychologia 43(2):260–267.

  • Blakemore, S.-J., Wolpert, D.M., and Frith, C.D. 1998. Central cancellation of self-produced tickle sensation. Nature Neuroscience 1(7):635–640.

  • Botvinick, M.M., Braver, T.S., Barch, D.M., Carter, C.S., and Cohen, J.D. 2001. Conflict monitoring and cognitive control. Psychological Review 108(3):624–652.

  • Bratman, M. 1987. Intentions, plans, and practical reason. Harvard University Press.

  • Brown-Schmidt, S., Gunlogson, C., and Tanenhaus, M.K. 2008. Addressees distinguish shared from private information when interpreting questions during interactive conversation. Cognition 107(3):1122–1134.

  • Chartrand, T.L., and Bargh, J.A. 1999. The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology 76(6):893–910.

  • Clark, H.H. 1996. Using language. Cambridge University Press.

  • Cuijpers, R.H., van Schie, H.T., Koppen, M., Erlhagen, W., and Bekkering, H. 2006. Goals and means in action observation: A computational approach. Neural Networks 19(3):311–322.

  • Demiris, Y., and Khadhouri, B. 2005. Hierarchical attentive multiple models for execution and recognition (HAMMER). Robotics and Autonomous Systems 54:361–369.

  • Desmurget, M., and Grafton, S. 2000. Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Sciences 4:423–431.

  • Dindo, H., Zambuto, D., and Pezzulo, G. 2011. Motor simulation via coupled internal models using sequential Monte Carlo. In Proceedings of IJCAI 2011.

  • Ferguson, M.J., and Bargh, J.A. 2004. How social perception can automatically influence behavior. Trends in Cognitive Sciences 8(1):33–39.

  • Frith, C.D., Blakemore, S.J., and Wolpert, D.M. 2000. Abnormalities in the awareness and control of action. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 355(1404):1771–1788.

  • Frith, C.D., and Frith, U. 2006. How we predict what other people are going to do. Brain Research 1079(1):36–46.

  • Frith, C.D., and Frith, U. 2008. Implicit and explicit processes in social cognition. Neuron 60(3):503–510.

  • Galantucci, B. 2009. Experimental semiotics: A new approach for studying communication as a form of joint action. Topics in Cognitive Science 1(2):393–410.

  • Gallese, V. 2009. Motor abstraction: A neuroscientific account of how action goals and intentions are mapped and understood. Psychological Research 73(4):486–498.

  • Garrod, S., and Pickering, M.J. 2009. Joint action, interactive alignment, and dialog. Topics in Cognitive Science 1(2):292–304.

  • Gergely, G., and Csibra, G. 2003. Teleological reasoning in infancy: The naive theory of rational action. Trends in Cognitive Sciences 7:287–292.

  • Grice, H.P. 1975. Logic and conversation. In Syntax and semantics, eds. P. Cole, and J.L. Morgan, vol. 3. New York: Academic Press.

  • Grosz, B.J., and Sidner, C. 1990. Plans for discourse. In Intentions in communication, eds. P.R. Cohen, J. Morgan, and M.E. Pollack. MIT Press.

  • Grush, R. 2004. The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences 27(3):377–396.

  • Hamilton, A.F.d.C., and Grafton, S.T. 2007. The motor hierarchy: From kinematics to goals and intentions. In Sensorimotor foundations of higher cognition, eds. P. Haggard, Y. Rossetti, and M. Kawato. Oxford University Press.

  • Horton, W.S., and Keysar, B. 1996. When do speakers take into account common ground? Cognition 59(1):91–117.

  • Isenhower, R.W., Richardson, M.J., Carello, C., Baron, R.M., and Marsh, K.L. 2010. Affording cooperation: Embodied constraints, dynamics, and action-scaled invariance in joint lifting. Psychonomic Bulletin & Review 17(3):342–347.

  • Jeannerod, M. 2001. Neural simulation of action: A unifying mechanism for motor cognition. NeuroImage 14:S103–S109.

  • Jeannerod, M. 2006. Motor cognition. Oxford University Press.

  • Kaelbling, L.P., Littman, M., and Cassandra, A.R. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence 101:99–134.

  • Kalman, R.E. 1960. A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82(1):35–45.

  • Kawato, M. 1999. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology 9:718–727.

  • Kelso, J.A.S. 1995. Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.

  • Kilner, J., Paulignan, Y., and Blakemore, S. 2003. An interference effect of observed biological movement on action. Current Biology 13:522–525.

  • Kilner, J.M., Friston, K.J., and Frith, C.D. 2007. Predictive coding: An account of the mirror neuron system. Cognitive Processing 8(3):159–166.

  • Kirsh, D. 2010. Thinking with external representations. AI & Society 25(4):441–454.

  • Knoblich, G., and Sebanz, N. 2008. Evolving intentions for social interaction: From entrainment to joint action. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 363(1499):2021–2031.

  • Newman-Norlund, R.D., Bosga, J., Meulenbroek, R.G.J., and Bekkering, H. 2008. Anatomical substrates of cooperative joint-action in a continuous motor task: Virtual lifting and balancing. NeuroImage 41(1):169–177.

  • Newman-Norlund, R.D., van Schie, H.T., van Zuijlen, A.M.J., and Bekkering, H. 2007. The mirror neuron system is more active during complementary compared with imitative action. Nature Neuroscience 10(7):817–818.

  • Pacherie, E. 2008. The phenomenology of action: A conceptual framework. Cognition 107:179–217.

  • Pezzulo, G. 2008. Coordinating with the future: The anticipatory nature of representation. Minds and Machines 18(2):179–225.

  • Pezzulo, G. 2011. Grounding procedural and declarative knowledge in sensorimotor anticipation. Mind and Language 26(1):78–114.

  • Pezzulo, G., and Castelfranchi, C. 2007. The symbol detachment problem. Cognitive Processing 8(2):115–131.

  • Pezzulo, G., and Castelfranchi, C. 2009. Thinking as the control of imagination: A conceptual framework for goal-directed systems. Psychological Research 73(4):559–577.

  • Pezzulo, G., and Dindo, H. 2011. What should I do next? Using shared representations to solve interaction problems. Experimental Brain Research. http://www.springerlink.com/content/v1626220237466x2/.

  • Pickering, M.J., and Garrod, S. 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences 27(2):169–190; discussion 190–226.

  • Prinz, W. 1990. A common coding approach to perception and action. In Relationships between perception and action, eds. O. Neumann, and W. Prinz, 167–201. Berlin: Springer Verlag.

  • Prinz, W. 1997. Perception and action planning. European Journal of Cognitive Psychology 9:129–154.

  • Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., and Matelli, M. 1988. Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Experimental Brain Research 71(3):491–507.

  • Rizzolatti, G., and Craighero, L. 2004. The mirror-neuron system. Annual Review of Neuroscience 27:169–192.

  • Searle, J. 1995. The construction of social reality. New York: The Free Press.

  • Sebanz, N., Bekkering, H., and Knoblich, G. 2006. Joint action: Bodies and minds moving together. Trends in Cognitive Sciences 10(2):70–76.

  • Sebanz, N., Knoblich, G., and Prinz, W. 2005. How two share a task: Corepresenting stimulus-response mappings. Journal of Experimental Psychology: Human Perception and Performance 31(6):1234–1246.

  • Stalnaker, R. 2002. Common ground. Linguistics and Philosophy 25:701–721.

  • Sugden, R. 2003. The logic of team reasoning. Philosophical Explorations 6(3):165–181.

  • Tomasello, M., Carpenter, M., Call, J., Behne, T., and Moll, H. 2005. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences 28(5):675–691; discussion 691–735.

  • Tsai, J.C.-C., Sebanz, N., and Knoblich, G. 2011. The groop effect: Groups mimic group actions. Cognition 118(1):135–140.

  • Umiltà, M.A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C., and Rizzolatti, G. 2001. I know what you are doing: A neurophysiological study. Neuron 31(1):155–165.

  • van der Wel, R., Knoblich, G., and Sebanz, N. 2010. Let the force be with us: Dyads exploit haptic coupling for coordination. Journal of Experimental Psychology: Human Perception and Performance. http://www.ncbi.nlm.nih.gov/pubmed/21417545.

  • Vesper, C., Butterfill, S., Knoblich, G., and Sebanz, N. 2010. A minimal architecture for joint action. Neural Networks 23(8–9):998–1003.

  • Wilson, M., and Knoblich, G. 2005. The case for motor involvement in perceiving conspecifics. Psychological Bulletin 131:460–473.

  • Wolpert, D.M., Doya, K., and Kawato, M. 2003. A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 358(1431):593–602.

  • Wolpert, D.M., Ghahramani, Z., and Jordan, M. 1995. An internal model for sensorimotor integration. Science 269:1179–1182.

  • Yoshida, W., Dolan, R.J., and Friston, K.J. 2008. Game theory of mind. PLoS Computational Biology 4(12):e1000254.


Acknowledgements

Research funded by the EU’s FP7 under grant agreement n. FP7-231453 (HUMANOBS). I want to thank Haris Dindo and David Kirsh for fruitful discussions, and two anonymous reviewers for helpful comments.

Author information


Correspondence to Giovanni Pezzulo.

Additional information

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreements 231453 (HUMANOBS) and 270108 (Goal-Leaders).

Appendix A: The Interactive Strategy from a Computational Viewpoint


From a computational perspective, we can model each agent as a generative (Bayesian) system in which hidden (i.e., not visible) cognitive variables, beliefs (B) and intentions (I), determine the selection of actions (A) and then of motor primitives (MP); these, in turn, determine the agent’s overt behavior, which becomes part of the world state (S). As the world state is only partially observable, we distinguish the observables (O) from the full, unobservable state of the world S (see Note 9). Note that, differently from what is shown in Fig. 4, we assume that beliefs are not only relative to the task but constitute, more generally, contextual knowledge that influences intention and action selection.
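To make the structure concrete, a minimal Python sketch follows (purely illustrative, not the paper’s implementation; the policy and dynamics functions, and the variable names, are assumptions): beliefs and intentions condition action selection, actions change the world state, and only part of that state is observed.

```python
import random

# Illustrative generative structure: (B, I) -> A -> S -> O.
# The motor-primitive level (MP) is skipped, as in Section A.1.

def sample_action(beliefs, intention, policy):
    """Select an action conditioned on beliefs and intention: P(A | B, I).
    `policy` is an assumed function returning {action: probability}."""
    probs = policy(beliefs, intention)
    actions, weights = zip(*probs.items())
    return random.choices(actions, weights=weights)[0]

def transition(state, action, dynamics):
    """World dynamics P(S_{t+1} | S_t, A); `dynamics` is an assumed function."""
    return dynamics(state, action)

def observe(state, drop_prob=0.1):
    """Partial observability: each state feature is observed only with some probability."""
    return {k: v for k, v in state.items() if random.random() > drop_prob}
```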

A.1 Action Planning and Execution

For the sake of simplicity, we can assume that each action A executed at time t achieves a certain action goal at time t + n, or in other terms determines a future goal state S_{t+n} (in doing this, we skip the level of motor primitives MP involved in action performance). If we also assume that there is a way to map an agent’s intentions (I) into goal states S_{t+n} (e.g., the intention of building a tower of red bricks can be mapped onto a state of the world in which there are 5 stacked red bricks), then planning consists in the choice of an action, or a sequence of actions, conditioned on the agent’s beliefs and intentions, which is expected to realize the future goal state S_{t+n} (and typically, as a consequence, to produce some reward, or even maximize reward); this can be done, for instance, with probabilistic planning methods (Bishop 2006).
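As a rough illustration of this kind of probabilistic planning (a hypothetical one-step sketch, not the method used in the paper), an agent can score each available action by the probability that its predicted outcomes satisfy the goal, and pick the best one:

```python
def plan_one_step(state, goal, actions, predict):
    """Choose the action most likely to realize the goal state S_{t+n}.

    `predict(state, action)` is an assumed forward model returning a
    distribution over successor states as {next_state: probability};
    `goal(next_state)` is a predicate that is True for goal states.
    """
    def goal_probability(action):
        return sum(p for s_next, p in predict(state, action).items() if goal(s_next))
    return max(actions, key=goal_probability)
```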

In the passage from plans to action execution, however, the mapping from desired goal states to (sequences of) actions is typically ill-posed and difficult, and for this reason it has been proposed that the brain makes use of internal (inverse) models to solve it. In addition, during action execution internal (forward) models could be adopted as well, to adapt the motor plan to the fast dynamics of the environment in cases in which feedback is too slow (Desmurget and Grafton 2000; Kawato 1999).

A.2 Prediction and State Estimation

As we have discussed, forward models serve (sensory and state) prediction during action execution. Formally, forward models permit mapping state and action information into a (sensory or state) prediction at the next time step, or more than one time step into the future (i.e., (S, A) → S_{t+n}).

However, in dynamic environments the effects of actions ((S, A) → S_{t+n}) are not easy to determine, for several reasons: first, in general agents can only access the observable part of the environment (O) and not its “true” state (S); second, the environment changes over time (i.e., S_t differs from S_{t+1}). Since actions can have different effects when executed in different contexts, and this can hinder the achievement of goals, a solution to this problem consists in estimating the true state of the environment (S) rather than acting on the basis of O, and at the same time learning to predict the dynamics of the environment (S_t → S_{t+1}). In generative Bayesian systems, the probabilistic inference P(S|O) can be performed, for instance, via iterative methods such as Kalman filtering (Kalman 1960); other, more sophisticated methods are necessary for non-linear cases.
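A one-dimensional Kalman filter illustrates the predict/update cycle behind this kind of state estimation (a toy scalar sketch with assumed noise parameters, not the formulation used in the paper):

```python
def kalman_step(mu, var, observation, action_effect=0.0, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter estimating the hidden state S.

    mu, var       : current Gaussian estimate of S (mean and variance)
    observation   : new noisy observation O
    action_effect : assumed known effect of the agent's action on S
    q, r          : assumed process and observation noise variances
    """
    # Predict the next state: S_t -> S_{t+1} under linear dynamics.
    mu_pred, var_pred = mu + action_effect, var + q
    # Update the prediction with the observation, i.e. refine P(S | O).
    gain = var_pred / (var_pred + r)
    mu_new = mu_pred + gain * (observation - mu_pred)
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new
```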

A.3 Action Prediction, Understanding and Mindreading

In interactive scenarios the situation is even more complex, since the behavior of other agents is an extra source of dynamics. Again, forward models can be used to predict the interactive dynamics and to adapt to them. However, recall that agents only have access to the observable part of the environment (O), and this makes prediction (and hence action planning and coordination) difficult. In analogy with state estimation, prediction accuracy can be improved by estimating the “true” state (S_t) behind its observable part (O). Now, the “true” state (S_t) is determined both by the environmental dynamics (S_{t-1} → S_t) and by the actions (A) performed by the other agent(s). Therefore, in addition to modeling the environmental transitions (S_{t-1} → S_t), it is also advantageous to model and predict the actions performed by the other agent(s). (Note that agents also have at their disposal information about which parts of the current state were produced by their own past actions, and can therefore “cancel out” the self-produced part from the stimuli to be explained; Blakemore et al. 1998; Frith et al. 2000.)

More generally, by noting (intentionally or unintentionally) that the actions (A) executed by other agents are selected on the basis of their hidden cognitive variables, their beliefs (B) and intentions (I), an even more complete solution consists in inferring these cognitive variables, too. From a computational viewpoint, then, mindreading can be conceptualized as the estimation of the hidden (cognitive) variables of the other agent, rather than simply the observation and prediction of its overt behavior.
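Formulated this way, mindreading can be sketched as ordinary Bayesian inference over the other agent’s hidden intention, given the actions observed so far (an illustrative sketch; the likelihood and prior functions are assumptions, e.g. obtained by re-using one’s own action-selection model for the observed agent):

```python
def infer_intention(observed_actions, candidate_intentions, likelihood, prior):
    """Posterior over the other agent's intention: P(I | A_1..A_n) ∝ P(I) * Π P(A_i | I)."""
    posterior = {}
    for intention in candidate_intentions:
        p = prior(intention)
        for action in observed_actions:
            p *= likelihood(action, intention)
        posterior[intention] = p
    total = sum(posterior.values()) or 1.0
    return {i: p / total for i, p in posterior.items()}
```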

Computational methods for mindreading under this formulation have formal similarities with state estimation, but are more complex, because the generative architecture that produces the observables is richer. What makes mindreading easier is the assumption that the observed agent has a generative architecture similar to one’s own; indeed, constraining the structure of a generative model makes it easier to infer the values of its variables.

Under this formulation, mental state inference can be performed at different levels (e.g., estimation of actions, intentions or beliefs), and can use whatever prior information is available, such as knowledge of an agent’s preference for certain actions over others (i.e., P(A)), and whatever other sources of information are available. Current implementations of mindreading range from inverse planning, which compares an agent’s actions against a normative, rational principle (Baker et al. 2009), to motor simulation (Demiris and Khadhouri 2005; Wolpert et al. 2003), which compares an agent’s actions with the putative effects of one’s own (derived by re-enacting one’s own motor system “in simulation”); see also Cuijpers et al. (2006).

Note that, in this process, motor contagion or the alignment of motor primitives in performer and observer agents is highly advantageous, in that another’s MP can be treated as “observations” and used to facilitate inference of the underlying beliefs, intentions and action goals that could have generated them. A second element that facilitates estimation is a consideration of which environmental affordances are available. Indeed, the availability of affordances and other contextual information (e.g., where the interaction takes place) can be considered as factors that raise the prior probability P(A) of the actions associated with those affordances and contexts. If this information becomes available to an observer, it can help the inferential process.
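In this spirit, affordances and context can be folded into the inference simply by boosting the prior probability of the actions they make available (a hypothetical sketch; the boost factor and the dictionary representation are assumptions):

```python
def contextual_prior(base_prior, afforded_actions, boost=2.0):
    """Raise P(A) for actions afforded by the current context, then renormalize.
    `base_prior` is an assumed dict {action: probability}."""
    raw = {a: p * (boost if a in afforded_actions else 1.0) for a, p in base_prior.items()}
    total = sum(raw.values())
    return {a: p / total for a, p in raw.items()}
```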

A.4 Advantages of Using Shared Representations and the Interactive Strategy

Within our model, shared representations (SR) can be considered as the aligned subset of the beliefs, intentions and actions of two (or more) agents, or in other terms the equivalence of a subset of I_agent1 with a subset of I_agent2, a subset of B_agent1 with a subset of B_agent2, and a subset of A_agent1 with a subset of A_agent2. It is worth noting that it is not necessary (nor indeed true) that all cognitive variables are shared. This also entails that, if agents have meta-beliefs about what is shared, these may differ between agents. First, only cognitive variables pertaining to task achievement are generally shared. Second, as shown in Fig. 3, representations can become shared during the interaction; and indeed, from a computational viewpoint, what matters for successful interaction is that (only) the representations that afford coordinated patterns of action execution and prediction become aligned.
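Read this way, the SR can be sketched simply as the intersection of the two agents’ task-relevant variables (an illustrative sketch in which each agent is represented, by assumption, as sets of beliefs, intentions and actions):

```python
def shared_representation(agent1, agent2):
    """SR as the aligned subset of beliefs, intentions and actions of two agents.
    Each agent is an assumed dict of sets, e.g.
    {"beliefs": {...}, "intentions": {...}, "actions": {...}}."""
    return {key: agent1[key] & agent2[key] for key in ("beliefs", "intentions", "actions")}
```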

At the beginning of the interaction only some beliefs and intentions are shared, mainly because of past experience, reliance on a common situated context, and the recognition of social situations and agreements. During the course of the interaction, both agents can perform actions whose goal is to change the SR rather than to achieve goals defined in terms of external states of affairs. These are communicative actions, which typically change another’s beliefs and intentions.

One advantage of using shared representations during action planning and observation (including the planning and observation of communicative actions) is that they are a novel source of information, one that is available to both agents and reliable, since both agents put effort into maintaining it and into signaling when it has to be updated. By using shared representations, an agent can more easily predict and understand the actions of the other agents even without estimating the “true” state of the world or their “true” cognitive variables (i.e., it can compute P(A|SR) rather than P(A|O) or P(A|I, O)). In turn, this can be done either by appealing to a principle of rationality (i.e., asking what would be the best action given the SR), similar to the method in Baker et al. (2009) (with the difference that the inference is much more constrained than asking what is the best action in general), or by using forward search and motor simulation, similar to Demiris and Khadhouri (2005) and Wolpert et al. (2003), possibly using the SR as priors that bias the search.
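For illustration, prediction from the SR alone can be sketched as scoring candidate actions directly against the shared task state, without any estimate of the world state or of the partner’s full mental state (the scoring function, e.g. a rationality measure or a motor simulation, is an assumption):

```python
def predict_partner_action(sr, candidate_actions, score):
    """Approximate P(A | SR): score each candidate action against the shared
    representation and normalize, bypassing P(A | O) and P(A | I, O)."""
    weights = {a: score(a, sr) for a in candidate_actions}
    total = sum(weights.values()) or 1.0
    return {a: w / total for a, w in weights.items()}
```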

Note also that, as suggested in Section 4.5, in order to implement the aforementioned form of planning it is not necessary to maintain a model of the other agent, or a model for each of the other agents, but only individual representations for action tied to task achievement, which can be used both for planning one’s own actions and for predicting the actions of the other agents (see Note 10).

Shared representations also give advantages in the planning of communicative actions. As we have highlighted, the standard way of expressing communicative intentions under this formulation consists in violating what would be predictable given the SR. Here, again, inferring what is surprising given the SR is easy and does not require any complex inferential mechanism, since the necessary source of information (the SR) is already available to the planning agent. In turn, choosing when and what to communicate (or how much to share) is more complex; indeed, most systems inspired by the social-aware strategy implicitly assume that it is necessary to share a great deal of information, such as my own and your part of a common plan, my short-term and long-term intentions, and so on.

However, as suggested in Section 4.5, communicative actions that change the SR can be interpreted as a form of “teaching” the other agents how to generate good predictions, or in other words communicating to them which value of the SR is more accurate for predicting P(A|SR). This gives rise to a mutual form of supervised learning. For this form of learning to be effective, the “learning episode” (i.e., the message) should not be casual, but selected on purpose so as to be timely and maximally informative.

In this context, being “timely” means that I should communicate when I know that your future inference P(A|SR) would be wrong, or in other terms when I know that the common ground is not sufficient for you to make good predictions. This criterion is used to decide when it is necessary for me to perform a communicative action, so that my actions and intentions are not misunderstood. (In addition, I occasionally change the SR as part of my pragmatic actions, for instance by adding a blue brick rather than a red one. In this case, however, the communicative implicature is automatic and no further planning is needed.)
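The timing criterion can be sketched as a simple check (hypothetical names and threshold, not a formulation from the paper): communicate only when, simulating your inference from the SR, my intended action comes out as unlikely.

```python
def should_communicate(my_next_action, sr, partner_prediction, threshold=0.5):
    """Decide *when* to communicate: only if the partner's prediction from the SR,
    P(my_next_action | SR) as simulated by me, falls below a threshold."""
    return partner_prediction(my_next_action, sr) < threshold
```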

To select what to communicate, instead, the criterion is informativeness: the aim is to lower the uncertainty (entropy) of the SR (which, in our example of Fig. 4, is equivalent to lowering the uncertainty about B) and to raise the probability that the other agent’s next predictions will be more accurate. This entails that I do not necessarily have to share with you everything I believe or intend to do, but only need to create a good enough ground for you to predict my actions well, to detect violations of the SR, and ultimately to conduct a successful interaction. Any additional information is better left unshared, since sharing it has a cost in terms of the success of the interaction.
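The informativeness criterion can likewise be sketched as choosing the message that most reduces the entropy of the partner’s belief about the SR (an illustrative sketch; the belief distribution and the update function are assumptions):

```python
import math

def entropy(distribution):
    """Shannon entropy of a discrete distribution given as {value: probability}."""
    return -sum(p * math.log(p) for p in distribution.values() if p > 0)

def best_message(messages, partner_sr_belief, update):
    """Decide *what* to communicate: pick the message that most lowers the
    partner's uncertainty about the SR. `update(belief, message)` returns the
    partner's belief distribution after receiving the message."""
    def information_gain(message):
        return entropy(partner_sr_belief) - entropy(update(partner_sr_belief, message))
    return max(messages, key=information_gain)
```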


Cite this article

Pezzulo, G. Shared Representations as Coordination Tools for Interaction. Rev.Phil.Psych. 2, 303–333 (2011). https://doi.org/10.1007/s13164-011-0060-5
