
2014 | OriginalPaper | Chapter

2. Reproduceerbaarheid

Authors: Arianne Verhagen, Jeroen Alessie

Published in: Evidence based diagnostiek van het bewegingsapparaat

Publisher: Bohn Stafleu van Loghum


Summary

This chapter covers all facets of the reproducibility of diagnostic tests that a physiotherapist may encounter. Knowledge of the characteristics of reproducibility (reliability) is useful in daily practice, as well as when reading a scientific article or searching the literature for a suitable diagnostic test to address a clinical problem. Various forms of reproducibility are explained, and concepts such as agreement, Kappa and correlation coefficients are discussed. Attention is also paid to the interpretation of all these concepts.
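As a brief illustration of one of the concepts named in the summary (not taken from the chapter itself), Cohen's kappa corrects the observed agreement between two raters for the agreement expected by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where $p_o$ is the observed proportion of agreement and $p_e$ the proportion of agreement expected by chance from the raters' marginal totals. With made-up numbers: if two physiotherapists each score 100 patients and agree on 40 positive and 30 negative findings, then $p_o = 0.70$; if rater A scores 60 patients positive and rater B 50, then $p_e = 0.60 \times 0.50 + 0.40 \times 0.50 = 0.50$, giving $\kappa = (0.70 - 0.50)/(1 - 0.50) = 0.40$.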
Metadata
Title
Reproduceerbaarheid
Authors
Arianne Verhagen
Jeroen Alessie
Copyright
2014
Publisher
Bohn Stafleu van Loghum
DOI
https://doi.org/10.1007/978-90-368-0821-7_2