A survey of socially interactive robots

https://doi.org/10.1016/S0921-8890(02)00372-X

Abstract

This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].

Introduction

Since the earliest biologically inspired robots, researchers have been fascinated by the possibility of interaction between a robot and its environment, and by the possibility of robots interacting with each other. Fig. 1 shows the robotic tortoises built by Walter in the late 1940s [73]. By means of headlamps mounted on the robots’ fronts and positive phototaxis, the two robots interacted in a seemingly “social” manner, even though there was no explicit communication or mutual recognition.
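As a concrete illustration, the sketch below (a toy Python simulation, not a model of Walter’s actual analogue circuitry; all parameters are arbitrary) shows how mutual positive phototaxis toward each other’s lamps is enough to make two simple vehicles approach and follow one another without any communication channel.

```python
import math

# Toy sketch: two point "tortoises", each carrying a lamp and steering toward
# the brightest light it senses (here, simply the other robot's lamp).
# Mutual positive phototaxis alone produces approach-and-follow behaviour
# that an observer may read as "social".

class Tortoise:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

    def step(self, light_x, light_y, speed=0.05, turn_gain=0.3):
        # Turn a fraction of the way toward the light, then move forward.
        bearing = math.atan2(light_y - self.y, light_x - self.x)
        error = math.atan2(math.sin(bearing - self.heading),
                           math.cos(bearing - self.heading))
        self.heading += turn_gain * error
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

a = Tortoise(0.0, 0.0, 0.0)
b = Tortoise(3.0, 1.0, math.pi)
for _ in range(200):
    ax, ay = a.x, a.y      # sample A's lamp position before either robot moves
    a.step(b.x, b.y)       # A steers toward B's lamp
    b.step(ax, ay)         # B steers toward A's lamp
print(f"final distance: {math.hypot(a.x - b.x, a.y - b.y):.2f}")
```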

As the field of artificial life emerged, researchers began applying principles such as stigmergy (indirect communication between individuals via modifications made to the shared environment) to achieve “collective” or “swarm” robot behavior. Stigmergy was first described by Grassé to explain how social insect societies can collectively produce complex behavior patterns and physical structures, even if each individual appears to work alone [16].

In the early 1990s, Deneubourg and his collaborators conducted the first experiments on stigmergy in simulated and physical “ant-like robots” [10], [53]. Since then, numerous researchers have developed robot collectives [88], [106] and have used robots as models for studying social insect behavior [87].
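The stigmergic principle behind such experiments can be sketched in a few lines. The toy simulation below is not Deneubourg’s actual rule set (the probabilities and grid size are invented for illustration): agents wander a grid, tend to pick up pucks that are locally isolated, and tend to drop them where other pucks are nearby, so clusters emerge purely through modifications of the shared environment.

```python
import random

# Toy stigmergy sketch in the spirit of "ant-like robot" clustering.
# Agents never communicate directly; they only read and modify the shared grid.

SIZE, PUCKS, AGENTS, STEPS = 20, 60, 5, 20000
grid = [[0] * SIZE for _ in range(SIZE)]
for _ in range(PUCKS):
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1

def local_density(x, y):
    # Pucks in the 3x3 neighbourhood (torus), excluding the centre cell.
    return sum(grid[(x + dx) % SIZE][(y + dy) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)) - grid[x][y]

agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE), "carrying": False}
          for _ in range(AGENTS)]

for _ in range(STEPS):
    for a in agents:
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % SIZE   # random walk
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % SIZE
        n = local_density(a["x"], a["y"])
        if not a["carrying"] and grid[a["x"]][a["y"]] > 0 and random.random() < 1.0 / (1 + n):
            grid[a["x"]][a["y"]] -= 1     # pick up: likelier when the puck is isolated
            a["carrying"] = True
        elif a["carrying"] and random.random() < n / 8.0:
            grid[a["x"]][a["y"]] += 1     # drop: likelier near an existing cluster
            a["carrying"] = False

# Fewer occupied cells than the initial scatter indicates that clusters have formed.
print("occupied cells:", sum(1 for row in grid for c in row if c))
```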

Similar principles can be found in multi-robot or distributed robotic systems research [101]. Some of the interaction mechanisms employed are communication [6], interference [68], and aggressive competition [159]. Common to these group-oriented social robots is the goal of maximizing benefit (e.g., task performance) through collective action (Fig. 2, Fig. 3, Fig. 4).

The research described thus far uses principles of self-organization and behavior inspired by social insect societies. Such societies are anonymous, homogeneous groups in which individuals do not matter. This type of “social behavior” has proven to be an attractive model for robotics, particularly because it enables groups of relatively simple robots to perform difficult tasks (e.g., soccer playing).

However, many species (including humans, other mammals, and birds) often form individualized societies. Individualized societies differ from anonymous societies because the individual matters. Although individuals may live in groups, they form relationships and social networks, they create alliances, and they often adhere to societal norms and conventions [38] (Fig. 5).

In [44], Dautenhahn and Billard proposed the following definition:

Social robots are embodied agents that are part of a heterogeneous group: a society of robots or humans. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other.

Developing such “individual social” robots requires the use of models and techniques different from “group social” collective robots (Fig. 6). In particular, social learning and imitation, gesture and natural language communication, emotion, and recognition of interaction partners are all important factors. Moreover, most research in this area has focused on the application of “benign” social behavior. Thus, social robots are usually designed as assistants, companions, or pets, in addition to the more traditional role of servants.

Robots in individualized societies exhibit a wide range of social behavior, regardless of whether the society contains other social robots, humans, or both. In [19], Breazeal defines four classes of social robots in terms of (1) how well the robot can support the social model that is ascribed to it and (2) the complexity of the interaction scenario that can be supported. These classes are as follows.

Socially evocative. Robots that rely on the human tendency to anthropomorphize and capitalize on feelings evoked when humans nurture, care for, or become involved with their “creation”.

Social interface. Robots that provide a “natural” interface by employing human-like social cues and communication modalities. Social behavior is only modeled at the interface, which usually results in shallow models of social cognition.

Socially receptive. Robots that are socially passive but that can benefit from interaction (e.g. learning skills by imitation). Deeper models of human social competencies are required than with social interface robots.

Sociable. Robots that pro-actively engage with humans in order to satisfy internal social aims (drives, emotions, etc.). These robots require deep models of social cognition.

Complementary to this list, we can add the following three classes:

Socially situated. Robots that are surrounded by a social environment that they perceive and react to [48]. Socially situated robots must be able to distinguish between other social agents and various objects in the environment.

Socially embedded. Robots that are: (a) situated in a social environment and interact with other agents and humans; (b) structurally coupled with their social environment; and (c) at least partially aware of human interactional structures (e.g., turn-taking) [48]; a minimal turn-taking sketch follows this list of classes.

Socially intelligent. Robots that show aspects of human-style social intelligence, based on deep models of human cognition and social competence [38], [40].
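As a schematic example of the “interactional structures” mentioned under socially embedded robots, the sketch below models turn-taking as a minimal two-state machine. The state names and events are hypothetical and are not drawn from any system surveyed here.

```python
from enum import Enum, auto

# Illustrative turn-taking model: who currently "holds the floor" in a dialogue.

class Turn(Enum):
    ROBOT = auto()
    HUMAN = auto()

class TurnTaking:
    def __init__(self):
        self.turn = Turn.HUMAN          # yield the floor to the human first

    def observe(self, event):
        # "human_done"/"robot_done" mark the end of an utterance or gesture;
        # "human_interrupt" lets the human barge in while the robot holds the floor.
        if self.turn is Turn.HUMAN and event == "human_done":
            self.turn = Turn.ROBOT
        elif self.turn is Turn.ROBOT and event in ("robot_done", "human_interrupt"):
            self.turn = Turn.HUMAN
        return self.turn

tt = TurnTaking()
for event in ["human_done", "robot_done", "human_done", "human_interrupt"]:
    print(event, "->", tt.observe(event).name)
```

A real system must, of course, infer such events from noisy perception (speech activity, gaze, gesture) rather than receive them as clean symbols.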

For the purposes of this paper, we use the term “socially interactive robots” to describe robots for which social interaction plays a key role. We do this, not to introduce another class of social robot, but rather to distinguish these robots from other robots that involve “conventional” human–robot interaction, such as those used in teleoperation scenarios.

In this paper, we focus on peer-to-peer human–robot interaction. Specifically, we describe robots that exhibit the following “human social” characteristics (illustrated schematically in the sketch after the list):

  • express and/or perceive emotions;

  • communicate with high-level dialogue;

  • learn/recognize models of other agents;

  • establish/maintain social relationships;

  • use natural cues (gaze, gestures, etc.);

  • exhibit distinctive personality and character;

  • may learn/develop social competencies.
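Purely for illustration, these characteristics can be read as a capability interface. The abstract class below is a hypothetical sketch of ours; none of the surveyed systems exposes such an API, and every name in it is invented.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class SociallyInteractiveRobot(ABC):
    """Hypothetical capability interface mirroring the list above."""

    @abstractmethod
    def express_emotion(self, emotion: str, intensity: float) -> None: ...

    @abstractmethod
    def perceive_emotion(self, observation: Any) -> Dict[str, float]: ...

    @abstractmethod
    def dialogue(self, utterance: str) -> str: ...

    @abstractmethod
    def update_agent_model(self, agent_id: str, observation: Any) -> None: ...

    @abstractmethod
    def relationship(self, agent_id: str) -> Dict[str, float]: ...

    @abstractmethod
    def natural_cues(self) -> List[str]: ...        # e.g., current gaze target, gesture

    @abstractmethod
    def personality(self) -> Dict[str, float]: ...  # stable trait parameters

    def learn_social_competency(self, experience: Any) -> None:
        # "may learn/develop": optional, hence a default no-op rather than abstract.
        pass
```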

Socially interactive robots can be used for a variety of purposes: as research platforms, as toys, as educational tools, or as therapeutic aids. The common, underlying assumption is that humans prefer to interact with machines in the same way that they interact with other people. A survey and taxonomy of current applications is given in [60].

Socially interactive robots operate as partners, peers or assistants, which means that they need to exhibit a certain degree of adaptability and flexibility to drive the interaction with a wide range of humans. Socially interactive robots can have different shapes and functions, ranging from robots whose sole purpose and only task is to engage people in social interactions (Kismet, Cog, etc.) to robots that are engineered to adhere to social norms in order to fulfill a range of tasks in human-inhabited environments (Pearl, Sage, etc.) [18], [117], [127], [140].

Some socially interactive robots use deep models of human interaction and pro-actively encourage social interaction. Others show their social competence only in reaction to human behavior, relying on humans to attribute mental states and emotions to the robot [39], [45], [55], [125]. Regardless of function, building a socially interactive robot requires considering the human in the loop: as designer, as observer, and as interaction partner.

Socially interactive robots are important for domains in which robots must exhibit peer-to-peer interaction skills, either because such skills are required for solving specific tasks, or because the primary function of the robot is to interact socially with people. A discussion of application domains, design spaces, and desirable social skills for robots is given in [42], [43].

One area where social interaction is desirable is that of “robot as persuasive machine” [58], i.e., the robot is used to change the behavior, feelings or attitudes of humans. This is the case when robots mediate human–human interaction, as in autism therapy [162]. Another area is “robot as avatar” [123], in which the robot functions as a representation of, or representative for, the human. For example, if a robot is used for remote communication, it may need to act socially in order to effectively convey information.

In certain scenarios, it may be desirable for a robot to develop its interaction skills over time. For example, a pet robot that accompanies a child through his childhood may need to improve its skills in order to maintain the child’s interest. Learned development of social (and other) skills is a primary concern of epigenetic robotics [44], [169].

Some researchers design socially interactive robots simply to study embodied models of social behavior. For this use, the challenge is to build robots that have an intrinsic notion of sociality, that develop social skills and bond with people, and that can show empathy and true understanding. At present, such robots remain a distant goal [39], [44], the achievement of which will require contributions from other research areas such as artificial life, developmental psychology and sociology [133].

Although socially interactive robots have already been used with success, much work remains to increase their effectiveness. For example, in order for socially interactive robots to be accepted as “natural” interaction partners, they need more sophisticated social skills, such as the ability to recognize social context and convention.

Additionally, socially interactive robots will eventually need to support a wide range of users: different genders, different cultural and social backgrounds, different ages, etc. In many current applications, social robots engage only in short-term interaction (e.g., a museum tour) and can afford to treat all humans in the same manner. But, as soon as a robot becomes part of a person’s life, that robot will need to be able to treat him as a distinct individual [40].

In the following, we closely examine the concepts raised in this introductory section. We begin by describing different design methods. Then, we present a taxonomy of system components, focusing on the design issues unique to socially interactive robots. We conclude by discussing open issues and core challenges.


Design approaches

Humans are experts in social interaction. Thus, if technology adheres to human social expectations, people will find the interaction enjoyable, feeling empowered and competent [130]. Many researchers, therefore, explore the design space of anthropomorphic (or zoomorphic) robots, trying to endow their creations with characteristics of intentional agents. For this reason, more and more robots are being equipped with faces, speech recognition, lip-reading skills, and other features and capacities…

Human perception of social robots

A key difference between conventional and socially interactive robots is that the way in which a human perceives a robot establishes expectations that guide his interaction with it. This perception, especially of the robot’s intelligence, autonomy, and capabilities, is influenced by numerous factors, both intrinsic and extrinsic.

Clearly, the human’s preconceptions, knowledge, and prior exposure to the robot (or similar robots) have a strong influence. Additionally, aspects of the robot’s design…

Acknowledgements

We would like to thank the participants of the Robot as Partner: An Exploration of Social Robots workshop (2002 IEEE International Conference on Intelligent Robots and Systems) for inspiring this paper. We would also like to thank Cynthia Breazeal, Lola Cañamero, and Sara Kiesler for their insightful comments. This work was partially supported by EPSRC grant GR/M62648.


References (169)

  • B. Adams, C. Breazeal, R.A. Brooks, B. Scassellati, Humanoid robots: a new kind of tool, IEEE Intelligent Systems 15...
  • J. Aggarwal, Q. Cai, Human motion analysis: A review, Computer Vision and Image Understanding 73 (3) (1999)...
  • P. Andry, P. Gaussier, S. Moga, J.P. Banquet, Learning and communication via imitation: An autonomous robot...
  • R. Arkin, M. Fujita, T. Takagi, R. Hasekawa, An ethological and emotional basis for human–robot interaction, Robotics...
  • C. Armon-Jones, The social functions of emotions, in: R. Harré (Ed.), The Social Construction of Emotions, Basil...
  • T. Balch, R. Arkin, Communication in reactive multiagent robotic systems, Autonomous Robots 1...
  • S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind, MIT Press, Cambridge, MA,...
  • C. Bartneck, M. Okada, Robotic user interfaces, in: Proceedings of the Human and Computer Conference,...
  • C. Bartneck, eMuu—an emotional embodied character for the ambient intelligent home, Ph.D. Thesis, Technical University...
  • R. Beckers, et al., From local actions to global tasks: Stigmergy and collective robotics, in: Proceedings of...
  • A. Billard, Robota: clever toy and educational tool, Robotics and Autonomous Systems 42 (2003) 259–269 (this...
  • A. Billard, K. Dautenhahn, Grounding communication in situated, social robots, in: Proceedings Towards Intelligent...
  • A. Billard, K. Dautenhahn, Grounding communication in autonomous robots: An experimental study, Robotics and Autonomous...
  • A. Billard, K. Dautenhahn, Experiments in learning by imitation: Grounding and use of communication in robotic agents,...
  • A. Billard, G. Hayes, Learning to communicate through imitation in autonomous robots, in: Proceedings of the...
  • E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press,...
  • C. Breazeal, A motivational system for regulating human–robot interaction, in: Proceedings of the National Conference...
  • C. Breazeal, Designing Sociable Robots, MIT Press, Cambridge, MA,...
  • C. Breazeal, Toward sociable robots, Robotics and Autonomous Systems 42 (2003) 167–175 (this...
  • C. Breazeal, Designing sociable robots: Lessons learned, in: K. Dautenhahn, et al. (Eds.), Socially Intelligent Agents:...
  • C. Breazeal, P. Fitzpatrick, That certain look: Social amplification of animate vision, in: Proceedings of the AAAI...
  • C. Breazeal, B. Scassellati, How to build robots that make friends and influence people, in: Proceedings of the...
  • C. Breazeal, B. Scassellati, A context-dependent attention system for a social robot, in: Proceedings of the...
  • C. Breazeal, B. Scassellati, Challenges in building robots that imitate people, in: K. Dautenhahn, C. Nehaniv (Eds.),...
  • C. Breazeal, A. Edsinger, P. Fitzpatrick, B. Scassellati, Active vision systems for sociable robots, IEEE Transactions...
  • A. Bruce, I. Nourbakhsh, R. Simmons, The role of expressiveness and attention in human–robot interaction, in:...
  • K. Bumby, K. Dautenhahn, Investigating children’s attitudes towards robots: A case study, in: Proceedings of the...
  • J. Cahn, The generation of affect in synthesized speech, Journal of American Voice I/O Society 8 (1990)...
  • L. Cañamero, Modeling motivations and emotions as a basis for intelligent behavior, in: W. Johnson (Ed.), Proceedings...
  • L. Cañamero, J. Fredslund, I show you how I like you—can you read it in my face? IEEE Transactions on Systems, Man and...
  • L. Cañamero (Ed.), Emotional and Intelligent II: The Tangled Knot of Social Cognition, Technical Report No. FS-01-02,...
  • J. Cassell, Nudge, nudge, wink, wink: Elements of face-to-face conversation for embodied conversational agents, in: J....
  • J. Cassell, et al. (Eds.), Embodied Conversational Agents, MIT Press, Cambridge, MA,...
  • R. Chellappa, et al., Human and machine recognition of faces: A survey, Proceedings of the IEEE 83 (5)...
  • M. Coulson, Expressing emotion through body movement: A component process approach, in: R. Aylett, L. Cañamero (Eds.),...
  • J. Crowley, Vision for man–machine interaction, Robotics and Autonomous Systems 19 (1997)...
  • C. Darwin, The Expression of Emotions in Man and Animals, Oxford University Press, Oxford,...
  • K. Dautenhahn, Getting to know each other—artificial social intelligence for autonomous robots, Robotics and Autonomous...
  • K. Dautenhahn, I could be you—the phenomenological dimension of social understanding, Cybernetics and Systems Journal...
  • K. Dautenhahn, The art of designing socially intelligent agents—science, fiction, and the human in the loop, Applied...
  • K. Dautenhahn, Socially intelligent agents and the primate social brain—towards a science of social minds, in:...
  • K. Dautenhahn, Roles and functions of robots in human society—implications from research in autism therapy, Robotica,...
  • K. Dautenhahn, Design spaces and niche spaces of believable social robots, in: Proceedings of the International...
  • K. Dautenhahn, A. Billard, Bringing up robots or—the psychology of socially intelligent robots: From theory to...
  • K. Dautenhahn, C. Nehaniv, Living with socially intelligent agents: A cognitive technology view, in: K. Dautenhahn...
  • K. Dautenhahn, C. Nehaniv (Eds.), Imitation in Animals and Artifacts, MIT Press, Cambridge, MA,...
  • K. Dautenhahn, I. Werry, A quantitative technique for analysing robot–human interactions, in: Proceedings of the...
  • K. Dautenhahn, B. Ogden, T. Quick, From embodied to socially embedded agents—implications for interaction-aware robots,...
  • K. Dautenhahn, I. Werry, J. Rae, P. Dickerson, Robotic playmates: Analysing interactive competencies of children with...
  • D. Dennett, The Intentional Stance, MIT Press, Cambridge, MA,...

    Terrence Fong is a joint postdoctoral fellow at Carnegie Mellon University (CMU) and the Swiss Federal Institute of Technology/Lausanne (EPFL). He received his Ph.D. (2001) in Robotics from CMU. From 1990 to 1994, he worked at the NASA Ames Research Center, where he was co-investigator for virtual environment telerobotic field experiments. His research interests include human–robot interaction, PDA and web-based interfaces, and field mobile robots.

    Illah Nourbakhsh is an Assistant Professor of Robotics at Carnegie Mellon University (CMU) and is co-founder of the Toy Robots Initiative at The Robotics Institute. He received his Ph.D. (1996) in computer science from Stanford. He is a founder and chief scientist of Blue Pumpkin Software, Inc. and Mobot, Inc. His current research projects include robot learning, believable robot personality, visual navigation and robot locomotion.

    Kerstin Dautenhahn is a Reader in artificial intelligence in the Computer Science Department at University of Hertfordshire, where she also serves as coordinator of the Adaptive Systems Research Group. She received her doctoral degree in natural sciences from the University of Bielefeld. Her research lies in the areas of socially intelligent agents and HCI, including virtual and robotic agents. She has served as guest editor for numerous special journal issues in AI, cybernetics, artificial life, and recently co-edited the book “Socially Intelligent Agents—Creating Relationships with Computers and Robots”.
