
Using nonspeech sounds to provide navigation cues

Published: 1 September 1998

Abstract

This article describes three experiments that investigate the possibility of using structured nonspeech audio messages, called earcons, to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created, with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons are a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as "does the lower quality of sound over the telephone lower recall rates?," "can users remember earcons over a period of time?," and "what effect does training type have on recall?" An experiment was conducted, and results showed that sound quality did lower the recall of earcons. However, a redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall: with personal training, participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons: a hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
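The hierarchical-earcon idea described above can be sketched in code. This is an illustrative reconstruction, not the paper's actual rules: it assumes each node inherits its parent's motifs and appends one new motif whose timbre identifies the level and whose rhythm identifies the branch taken. All parameter names and values below are hypothetical.

```python
# Hypothetical sketch of hierarchical earcon construction: a node's earcon
# is the sequence of motifs accumulated along the path from the root, so a
# listener who knows the rules can decode the full location from the sound.
from dataclasses import dataclass


@dataclass(frozen=True)
class Motif:
    timbre: str   # instrument family distinguishing the hierarchy level
    rhythm: str   # rhythmic pattern distinguishing the sibling branch
    pitch: int    # starting pitch, spreading siblings across the register


# One distinguishing timbre per level, one rhythm per sibling index
# (illustrative values only).
LEVEL_TIMBRES = ["organ", "marimba", "trumpet", "violin"]
BRANCH_RHYTHMS = ["long-long", "short-short-long", "short-long-short"]


def earcon_for(path):
    """Build the earcon (a list of motifs) for the node reached by
    following `path`, a list of child indices from the root."""
    motifs = []
    for level, branch in enumerate(path):
        motifs.append(Motif(
            timbre=LEVEL_TIMBRES[level],
            rhythm=BRANCH_RHYTHMS[branch],
            pitch=60 + 4 * branch,
        ))
    return motifs


# A child's earcon always begins with its parent's earcon, which is what
# makes the hierarchy audible.
parent = earcon_for([1, 2])
child = earcon_for([1, 2, 0])
assert child[:len(parent)] == parent
```

The prefix property checked at the end is the navigational cue: hearing a familiar opening tells the listener which subtree they are in, and the final motif tells them the node.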



                Reviews

                Martha Elizabeth Crosby

Advances in the understanding of how people use computers have led to the greatest innovations in software development. Brewster describes three experiments that show how structured nonverbal audio messages called "earcons" can provide navigational cues in a nonverbal user interface. Earcons are abstract musical tones that use repetition, variation, and contrast of qualities such as timbre, register, intensity, pitch, and rhythm in structured combinations to create sound messages.

Experiment 1 used a menu hierarchy of 27 nodes on 4 levels. The participants correctly recalled 81.5 percent of the earcons they heard, showing the viability of an aural interface. Experiment 2 was designed to generalize these results by addressing the influence of sound quality, method of training, and time on the percentage of earcons recalled. Brewster reports that participants could still recall the earcons after a week, but that the quality of sound and the type of training influenced how well they recalled them. Experiment 3 used compound rather than hierarchical earcons to represent the structure from experiment 1. Earcons were created that did not require the participants to remember more than seven rules. The new design improved the percentage of earcons recalled from 81.5 percent to 97 percent. This type of earcon design has the advantage of supporting arbitrarily sized hierarchies, and participants do not need to be retrained for each new structure's size and shape.

This paper provides interface designers with valuable information. It will be particularly valuable to those who are responsible for designing complex displays to clearly present large amounts of changing data in ways compatible with users' information needs. In addition to developers of applications that use telephone-based interfaces, mentioned by Brewster, this work should be of particular interest to designers of multimodal computing environments.
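The compound-earcon design from experiment 3 can be contrasted with the hierarchical one in a short sketch. The assumption here, which the review supports, is that each hierarchy element gets its own fixed sound and a node's earcon is simply the concatenation of the sounds along its path, so listeners learn a small fixed vocabulary (seven rules or fewer) rather than one sound per node. The menu names and sound labels below are purely illustrative.

```python
# Hypothetical sketch of compound earcons: a small fixed vocabulary of
# per-element sounds, concatenated along the path to a node. Adding nodes
# to the hierarchy requires no retraining as long as the vocabulary holds.
SOUNDS = {
    "banking": "piano-chord",
    "loans": "drum-roll",
    "insurance": "bell",
    "car": "rising-scale",
    "house": "falling-scale",
}


def compound_earcon(path):
    """Concatenate the per-element sounds for each menu name on the path."""
    return [SOUNDS[name] for name in path]


assert compound_earcon(["banking", "loans"]) == ["piano-chord", "drum-roll"]
```

This captures why the review notes that compound earcons scale to arbitrarily sized hierarchies: the mapping users must learn grows with the vocabulary, not with the number of nodes.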


                • Published in

ACM Transactions on Computer-Human Interaction, Volume 5, Issue 3 (Sept. 1998), 118 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/292834

                  Copyright © 1998 ACM


                  Publisher

                  Association for Computing Machinery

                  New York, NY, United States

