2018 | OriginalPaper | Chapter

2. Reproducibility

Authors: Dr. Arianne Verhagen, Drs. Jeroen Alessie

Published in: Evidence based diagnostics of musculoskeletal disorders in primary care

Publisher: Bohn Stafleu van Loghum

Abstract

This chapter covers the aspects of reproducibility of diagnostic tests relevant to physiotherapists in daily practice. Knowledge about the reproducibility of tests matters especially when reading a research paper or searching the literature for clinically relevant diagnostic tools to tackle a clinical problem. We explain the different elements of reproducibility, such as agreement, kappa and correlation coefficients, as well as how to interpret these concepts.
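To make the agreement and kappa concepts named in the abstract concrete, a minimal sketch follows. It computes Cohen's kappa for two raters scoring a dichotomous test on the same patients; the counts are hypothetical, invented for illustration, and do not come from the chapter.

```python
# Illustrative calculation of Cohen's kappa from a 2x2 agreement table.
# The counts are hypothetical, not taken from the chapter.

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Kappa from a 2x2 table: a = both raters positive,
    b = only rater 1 positive, c = only rater 2 positive,
    d = both raters negative."""
    n = a + b + c + d
    p_observed = (a + d) / n                    # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)       # chance agreement on "positive"
    p_neg = ((c + d) / n) * ((b + d) / n)       # chance agreement on "negative"
    p_expected = p_pos + p_neg                  # total chance-expected agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: 100 patients, 85 % observed agreement.
kappa = cohens_kappa(a=40, b=10, c=5, d=45)
print(round(kappa, 2))  # kappa is about 0.70 here
```

Kappa corrects the observed agreement for the agreement expected by chance alone; here 85 % raw agreement yields a kappa near 0.70, which the commonly used Landis and Koch benchmarks would label substantial.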
Metadata
Title
Reproducibility
Authors
Dr. Arianne Verhagen
Drs. Jeroen Alessie
Copyright
2018
Publisher
Bohn Stafleu van Loghum
DOI
https://doi.org/10.1007/978-90-368-2146-9_2