Multimodal Interface Model for Socially Dependent People

R. Maskeliunas, V. Rudzionis

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6800)

Abstract

This paper presents an analysis of a multimodal interface model for socially dependent people. The general requirements were that the interface be as simple and as natural as possible (in principle, such an interface should serve as a theoretical replacement for the typical "standard" one-finger "joystick" control). The experiments performed allowed us to identify the most frequently used commands and the expected accuracy levels for the selected applications, and to run various usability tests.
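
To make the one-finger-joystick metaphor concrete, below is a minimal sketch of what such a voice-driven replacement could look like. It is an illustrative sketch only, not the system described in the paper: the RecognitionResult type, the five-command vocabulary, and the 0.7 rejection threshold are all assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class RecognitionResult:
    """Output of a hypothetical speech recognizer (assumed shape)."""
    command: str       # recognized word, e.g. "up"
    confidence: float  # recognizer score in [0, 1]


class VoiceJoystick:
    """Maps spoken commands onto the five actions of a one-finger
    joystick: four directions plus a select/confirm action."""

    def __init__(self, threshold: float = 0.7) -> None:
        # Commands scored below the threshold are rejected and the
        # user is asked to repeat, instead of acting on a guess.
        self.threshold = threshold
        self.actions: Dict[str, Callable[[], None]] = {
            "up":     lambda: print("cursor up"),
            "down":   lambda: print("cursor down"),
            "left":   lambda: print("cursor left"),
            "right":  lambda: print("cursor right"),
            "select": lambda: print("item selected"),
        }

    def handle(self, result: RecognitionResult) -> bool:
        """Dispatch a recognized command; return True if it was acted on."""
        action = self.actions.get(result.command)
        if action is None or result.confidence < self.threshold:
            print("please repeat the command")
            return False
        action()
        return True


if __name__ == "__main__":
    joystick = VoiceJoystick(threshold=0.7)
    joystick.handle(RecognitionResult("up", 0.92))      # accepted
    joystick.handle(RecognitionResult("select", 0.55))  # rejected: low confidence
```

The rejection threshold is the kind of parameter the paper's accuracy experiments would inform: set it too low and misrecognitions trigger wrong actions; set it too high and users must repeat themselves constantly.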

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Maskeliunas, R., Rudzionis, V. (2011). Multimodal Interface Model for Socially Dependent People. In: Esposito, A., Vinciarelli, A., Vicsi, K., Pelachaud, C., Nijholt, A. (eds) Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Lecture Notes in Computer Science, vol 6800. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25775-9_11

  • DOI: https://doi.org/10.1007/978-3-642-25775-9_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25774-2

  • Online ISBN: 978-3-642-25775-9
