
Maintaining Content Validity in Computerized Adaptive Testing


Abstract

A major advantage of computerized adaptive testing (CAT) is improved measurement efficiency: targeting item selections to the abilities of examinees yields better score reliability or mastery decisions. However, this type of engineering solution can produce differential content for examinees at different levels of ability. This paper empirically demonstrates some of the trade-offs that can occur when content balancing is imposed in CAT forms or, conversely, when it is ignored. When content balancing is ignored, the content validity of a CAT form can actually change across the score scale; on the other hand, efficiency and score precision can be severely reduced by over-specifying content restrictions in a CAT form. The results of two simulation studies are presented to highlight the trade-offs that can occur between content and statistical considerations in CAT form assembly.
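
To make the mechanism concrete, the sketch below contrasts unconstrained maximum-information item selection with a simple content-balanced variant that restricts each selection to the content area currently furthest below its target proportion, similar in spirit to the constrained selection procedures the paper examines. This is a minimal illustration assuming a toy 2PL item pool; the names used here (select_next_item, TARGET, and so on) are hypothetical and are not the authors' simulation code.

import numpy as np

# Hypothetical 2PL item pool: discriminations a, difficulties b, and a
# content-area label for each item. (Toy values, for illustration only.)
rng = np.random.default_rng(1)
N_ITEMS = 300
a = rng.uniform(0.5, 2.0, N_ITEMS)
b = rng.normal(0.0, 1.0, N_ITEMS)
area = rng.integers(0, 4, N_ITEMS)            # four content areas
TARGET = np.array([0.40, 0.30, 0.20, 0.10])   # desired content proportions

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a**2 * P * (1 - P).
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_next_item(theta, administered, balance=True):
    # Pick the most informative unused item. If balance is True, first
    # restrict the search to the content area furthest below its target.
    available = ~administered
    if balance:
        counts = np.bincount(area[administered], minlength=4)
        deficit = TARGET - counts / max(counts.sum(), 1)
        for k in np.argsort(-deficit):        # largest deficit first
            mask = available & (area == k)
            if mask.any():                    # skip exhausted areas
                available = mask
                break
    info = np.where(available, info_2pl(theta, a, b), -np.inf)
    return int(np.argmax(info))

# One short simulated administration at a provisional ability of 0.0.
administered = np.zeros(N_ITEMS, dtype=bool)
theta_hat = 0.0
for _ in range(20):
    j = select_next_item(theta_hat, administered, balance=True)
    administered[j] = True
    # A real CAT would score the response and update theta_hat here
    # (e.g., by maximum likelihood); omitted to keep the sketch short.

With balance=False the selector always chases maximum information, so the realized content mix drifts with the examinee's ability; with balance=True the content targets are honored at the cost of sometimes passing over the single most informative item. This is exactly the efficiency-versus-content-validity trade-off described above.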




Cite this article

Luecht, R.M., de Champlain, A. & Nungester, R.J. Maintaining Content Validity in Computerized Adaptive Testing. Adv Health Sci Educ Theory Pract 3, 29–41 (1998). https://doi.org/10.1023/A:1009789314011
