
The Reliability of Scores from Affective Instruments

Chapter in Instrument Development in the Affective Domain
Abstract

In psychometrics, reliability describes the consistency of a measure. In the instrument development process, reliability takes on a specific meaning: the goal becomes determining how much of the variability in instrument scores is due to “measurement error” and how much is due to variability in the respondents’ “true scores.” This chapter approaches reliability from the classical test theory (CTT) model. Most importantly, the chapter focuses on the concept of correlation and how to index it for the purpose of understanding the “reliability,” or consistency, of an instrument under the CTT framework. The chapter discusses a variety of ways that reliability can be understood and how each represents a distinct quantification of consistency. Through several examples, different types of reliability are illustrated, along with the “acceptable” levels of consistency that should be expected when measuring characteristics in the affective domain. One of the central limitations of true score theory is that only one type of error can be addressed at a time. This chapter provides a brief introduction to the conceptual framework of generalizability theory (G-theory), which offers one potential solution to this issue. The chapter concludes with a discussion linking validity and reliability, the two central concepts that frame all the work behind the development of instruments for measuring affective characteristics.
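As a minimal sketch of the CTT decomposition described above (standard notation, not reproduced from the chapter), an observed score X is modeled as the sum of a true score T and measurement error E, and reliability is the proportion of observed-score variance attributable to true scores:

    X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}

One common single-administration index of this consistency is Cronbach’s coefficient alpha, one of the types of reliability the chapter covers. The Python sketch below is illustrative only; the function name cronbach_alpha and the toy Likert-type data are assumptions, not material from the chapter.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
        """
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Toy example: 5 respondents answering 4 Likert-type items
    scores = np.array([
        [4, 5, 4, 5],
        [2, 3, 2, 2],
        [5, 5, 4, 4],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
    ])
    print(round(cronbach_alpha(scores), 3))  # ~0.97 here; the items move together

Generalizability theory, previewed at the end of the abstract, relaxes the single undifferentiated error term E by partitioning error variance into multiple facets (for example, items, occasions, and raters) within one analysis.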


Notes

1. The symbol ! indicates “factorial.” For example, 4! = 4 × 3 × 2 × 1.


Author information


Correspondence to D. Betsy McCoach.


Copyright information

© 2013 Springer Science+Business Media New York


Cite this chapter

McCoach, D. B., Gable, R. K., & Madura, J. P. (2013). The Reliability of Scores from Affective Instruments. In: Instrument Development in the Affective Domain. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7135-6_7
