Editorial

Validity: Challenges in Conception, Methods, and Interpretation in Survey Research

Published Online: https://doi.org/10.1027/1614-2241/a000159
