DOI: 10.1145/1357054.1357077
CHI Conference Proceedings · research-article

Precision timing in human-robot interaction: coordination of head movement and utterance

Published: 6 April 2008

ABSTRACT

Research over the last several decades has shown that non-verbal actions such as face and head movement play a crucial role in human interaction, and such resources are likely to play an important role in human-robot interaction as well. In developing a robotic system that employs embodied resources such as face and head movement, we cannot simply program the robot to move at random; rather, we need to consider how these actions may be timed to specific points in the talk. This paper discusses our work in developing a museum guide robot that moves its head at interactionally significant points during its explanation of an exhibit. We first examined the coordination of verbal and non-verbal actions in human guide-visitor interaction. Based on this analysis, we developed a robot that moves its head at interactionally significant points in its talk. We then conducted several experiments to examine human participants' non-verbal responses to the robot's head and gaze turns. Our results show that participants are more likely to display non-verbal actions, and to do so with precision timing, when the robot turns its head and gaze at interactionally significant points than when it turns its head at points that are not interactionally significant. Based on these findings, we propose several suggestions for the design of a guide robot.


Published in:
CHI '08: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2008, 1870 pages
ISBN: 9781605580111
DOI: 10.1145/1357054

      Copyright © 2008 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates:
CHI '08 paper acceptance rate: 157 of 714 submissions (22%)
Overall CHI acceptance rate: 6,199 of 26,314 submissions (24%)
