Published in: Quality of Life Research 4/2018

16-12-2017

Careless responding in internet-based quality of life assessments

Authors: Stefan Schneider, Marcella May, Arthur A. Stone


Abstract

Purpose

Quality of life (QoL) measurement relies upon participants providing meaningful responses, but not all respondents may pay sufficient attention when completing self-reported QoL measures. This study examined the impact of careless responding on the reliability and validity of Internet-based QoL assessments.

Methods

Internet panelists (n = 2000) completed Patient-Reported Outcomes Measurement Information System (PROMIS®) short-forms (depression, fatigue, pain impact, applied cognitive abilities) and single-item QoL measures (global health, pain intensity) as part of a larger survey that included multiple checks of whether participants paid attention to the items. Latent class analysis was used to identify groups of non-careless and careless responders from the attentiveness checks. Analyses compared the psychometric properties of the QoL measures (reliability of the PROMIS short-forms, correlations among QoL scores, "known-groups" validity) between the non-careless and careless responder groups. We also examined whether person-fit statistics derived from the PROMIS measures accurately discriminated careless from non-careless responders.
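The latent class analysis is described only at a high level here (the study used Mplus). As an illustrative sketch — not the authors' implementation — a two-class model for binary attentiveness-check indicators can be fitted with a basic EM algorithm under the usual local-independence assumption. The indicator matrix `X` (1 = check failed) and all parameter names are hypothetical:

```python
import numpy as np

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    """EM for a latent class model with binary indicators.

    X: (n, j) 0/1 matrix of attentiveness-check failures.
    Returns class prevalences, per-class failure probabilities,
    and posterior class memberships.
    """
    rng = np.random.default_rng(seed)
    n, j = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class prevalences
    p = rng.uniform(0.25, 0.75, size=(n_classes, j))  # P(fail item | class)
    for _ in range(n_iter):
        # E-step: log P(x_i | class c), items independent given class
        loglik = (X[:, None, :] * np.log(p) +
                  (1 - X[:, None, :]) * np.log(1 - p)).sum(axis=2)
        logpost = np.log(pi) + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)       # responsibilities
        # M-step: update prevalences and item parameters
        pi = post.mean(axis=0)
        p = (post.T @ X) / post.sum(axis=0)[:, None]
        p = np.clip(p, 1e-6, 1 - 1e-6)                # keep logs finite
    return pi, p, post
```

The class with the higher average failure probability would be interpreted as the careless class; respondents are assigned to the class with the highest posterior probability.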

Results

About 7.4% of participants were classified as careless responders. The reliability of the PROMIS measures did not differ substantially between the non-careless and careless responder groups. However, careless responding meaningfully and significantly affected the correlations among QoL domains, as well as the magnitude of differences in QoL between medical and disability groups (presence or absence of disability, depression diagnosis, chronic pain diagnosis). Person-fit statistics distinguished between non-careless and careless responders with moderate, statistically significant accuracy.
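A toy simulation (not from the study) illustrates why even a small careless fraction can bias correlations among QoL domains. It models purely random responding, which attenuates correlations; other careless styles, such as straight-lining, can instead inflate them. Sample size, true correlation, and careless rate are assumptions chosen to mirror the study's setting:

```python
import numpy as np

def simulate_attenuation(n=2000, r_true=0.6, p_careless=0.074, seed=0):
    """Mix careless (random) responders into two correlated QoL scores;
    return the observed correlation with and without screening them out."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, r_true], [r_true, 1.0]]
    scores = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    careless = rng.random(n) < p_careless
    # Careless responders' scores are unrelated to their true standing
    scores[careless] = rng.uniform(-3, 3, size=(careless.sum(), 2))
    r_all = np.corrcoef(scores.T)[0, 1]
    r_screened = np.corrcoef(scores[~careless].T)[0, 1]
    return r_screened, r_all
```

With these settings, screening out the careless responders moves the observed correlation back toward the true value of 0.6.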

Conclusions

The results support the importance of identifying and screening out careless responders to ensure high-quality self-report data in Internet-based QoL research.
Footnotes
1
To evaluate how well two selected indicators would recover the full latent class solution, we fitted a series of additional latent class models considering all 21 combinations of indicator pairs. Most indicator pairs were reasonably successful at replicating the class assignments when judged by the proportion of respondents correctly assigned (range 0.80–0.96), positive predictive values (PPV range 0.59–1.0), and negative predictive values (NPV range 0.94–0.97). Pairs involving "inconsistent age reports" tended to perform the least well. Combinations of "median response time" with a vocabulary item, instructed response item, or figure matching task tended to most successfully recover the class assignments from the full model (proportions correctly assigned ≥ 0.95, PPV > 0.70, NPV > 0.96).
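The recovery metrics reported in this footnote (proportion correctly assigned, PPV, NPV) follow from a cross-tabulation of the reduced-model class assignments against the full-model assignments. A minimal sketch, with hypothetical function and variable names (1 = classified careless, 0 = non-careless):

```python
import numpy as np

def recovery_stats(full, reduced):
    """Agreement of a reduced-indicator class assignment with the
    full-model assignment, treating the full model as the reference."""
    full, reduced = np.asarray(full), np.asarray(reduced)
    tp = np.sum((reduced == 1) & (full == 1))   # flagged careless, truly careless
    fp = np.sum((reduced == 1) & (full == 0))
    tn = np.sum((reduced == 0) & (full == 0))
    fn = np.sum((reduced == 0) & (full == 1))
    return {
        "proportion_correct": (tp + tn) / len(full),
        "ppv": tp / (tp + fp),  # of those flagged careless, how many truly are
        "npv": tn / (tn + fn),  # of those flagged clean, how many truly are
    }
```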
 
Metadata
Title
Careless responding in internet-based quality of life assessments
Authors
Stefan Schneider
Marcella May
Arthur A. Stone
Publication date
16-12-2017
Publisher
Springer International Publishing
Published in
Quality of Life Research / Issue 4/2018
Print ISSN: 0962-9343
Electronic ISSN: 1573-2649
DOI
https://doi.org/10.1007/s11136-017-1767-2
