
An Introduction to Item Response Theory for Patient-Reported Outcome Measurement


Abstract

The growing emphasis on patient-centered care has accelerated the demand for high-quality data from patient-reported outcome (PRO) measures. Traditionally, the development and validation of these measures have been guided by classical test theory. However, item response theory (IRT), an alternative measurement framework, offers promise for addressing practical measurement problems in health-related research that have been difficult to solve through classical methods. This paper introduces foundational concepts in IRT, as well as commonly used models and their assumptions. Existing data on a combined sample (n = 636) of Korean American and Vietnamese American adults who responded to the High Blood Pressure Health Literacy Scale and the Patient Health Questionnaire-9 are used to exemplify typical applications of IRT. These examples illustrate how IRT can be used to improve the development, refinement, and evaluation of PRO measures. Greater use of methods based on this framework can increase the accuracy and efficiency with which PROs are measured.
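
As a concrete illustration of the kind of model the paper introduces, below is a minimal sketch of the two-parameter logistic (2PL) model, one of the dichotomous IRT models in common use. Under the 2PL, the probability of endorsing an item depends on the respondent's latent trait level (theta), the item's discrimination (a), and its difficulty (b). The function names and parameter values here are illustrative only and are not drawn from the paper's analyses.

```python
import math

def prob_endorse_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) model: probability that a respondent
    at latent trait level `theta` endorses a dichotomous item with
    discrimination `a` and difficulty (location) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information_2pl(theta: float, a: float, b: float) -> float:
    """Fisher information the item contributes at `theta`; for the 2PL
    this equals a^2 * P * (1 - P), peaking where theta equals b."""
    p = prob_endorse_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item with moderate discrimination (a = 1.5) and average
# difficulty (b = 0.0). Endorsement probability rises with theta, and
# the item is most informative near its difficulty.
for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    p = prob_endorse_2pl(theta, 1.5, 0.0)
    info = item_information_2pl(theta, 1.5, 0.0)
    print(f"theta = {theta:+.1f}  P(endorse) = {p:.3f}  information = {info:.3f}")
```

In practice, item parameters would be estimated from response data using dedicated IRT software rather than set by hand; this sketch only shows how the estimated parameters translate into endorsement probabilities and item information.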



Acknowledgments

None of the authors have any conflicts of interest, perceived or real, relevant to this paper. This work was not externally funded. THN and KSC conceived, designed, and wrote the manuscript. THN conducted all data analyses. HRH and MTK provided the datasets used in the analyses. They also reviewed and provided substantial feedback on earlier drafts. All authors reviewed and approved the final draft of this manuscript. KSC will act as the overall guarantor of this article.

Author information


Correspondence to Kitty S. Chan.


Cite this article

Nguyen, T.H., Han, HR., Kim, M.T. et al. An Introduction to Item Response Theory for Patient-Reported Outcome Measurement. Patient 7, 23–35 (2014). https://doi.org/10.1007/s40271-013-0041-0

