ABSTRACT
Research over the past several decades has shown that non-verbal actions such as face and head movement play a crucial role in human interaction, and such resources are likely to play an equally important role in human-robot interaction. In developing a robotic system that employs embodied resources such as face and head movement, we cannot simply program the robot to move at random; rather, we need to consider how these actions may be timed to specific points in the talk. This paper discusses our work in developing a museum guide robot that moves its head at interactionally significant points during its explanation of an exhibit. We first examined the coordination of verbal and non-verbal actions in human guide-visitor interaction. Based on this analysis, we developed a robot that moves its head at interactionally significant points in its talk. We then conducted several experiments to examine human participants' non-verbal responses to the robot's head and gaze turns. Our results show that participants are more likely to display non-verbal actions, and to do so with precision timing, when the robot turns its head and gaze at interactionally significant points than when it does so at points that are not interactionally significant. Based on these findings, we propose several suggestions for the design of a guide robot.
Index Terms
- Precision timing in human-robot interaction: coordination of head movement and utterance