
Innovative Items for Computerized Testing

Chapter in Elements of Adaptive Testing

Abstract

As computer-based testing (CBT) becomes a dominant, if not the dominant, medium for delivering assessments, interest in the potential of innovative items has grown. Innovative items are those that make use of features and functions of the computer to deliver assessments that do things not easily done in traditional paper-and-pencil assessments.

Copyright information

© 2009 Springer Science+Business Media, LLC

Cite this chapter

Parshall, C.G., Harmes, J.C., Davey, T., Pashley, P.J. (2009). Innovative Items for Computerized Testing. In: van der Linden, W., Glas, C. (eds) Elements of Adaptive Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-0-387-85461-8_11
