
Education And Debate

Sifting the evidence—what's wrong with significance tests?
Another comment on the role of statistical methods

BMJ 2001;322:226 doi: https://doi.org/10.1136/bmj.322.7280.226 (Published 27 January 2001)
  1. Jonathan A C Sterne (jonathan.sterne@bristol.ac.uk), senior lecturer in medical statistics,
  2. George Davey Smith, professor of clinical epidemiology
  1. Department of Social Medicine, University of Bristol, Bristol BS8 2PR
  2. Nuffield College, Oxford OX1 1NF
  1. Correspondence to: J Sterne
  • Accepted 9 November 2000

The findings of medical research are often met with considerable scepticism, even when they have apparently come from studies with sound methodologies that have been subjected to appropriate statistical analysis. This is perhaps particularly the case with respect to epidemiological findings that suggest that some aspect of everyday life is bad for people. Indeed, one recent popular history, the medical journalist James Le Fanu's The Rise and Fall of Modern Medicine, went so far as to suggest that the solution to medicine's ills would be the closure of all departments of epidemiology.1

One contributory factor is that the medical literature shows a strong tendency to accentuate the positive: positive outcomes are more likely to be reported than null results.2-4 By this means alone a host of purely chance findings will be published, since, by conventional reasoning, examining 20 associations will on average produce one result that is “significant at P=0.05” by chance alone. If only positive findings are published then they may be mistakenly considered to be of importance, rather than being the chance results that criteria for meaningfulness based on statistical significance necessarily produce. As many studies collect information on hundreds of variables through long questionnaires, and measure a wide range of potential outcomes, several false positive findings are virtually guaranteed. The high volume and often contradictory nature5 of medical research findings is not, however, due to publication bias alone. A more fundamental problem is the widespread misunderstanding of the nature of statistical significance.
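To make this arithmetic concrete, the sketch below (an illustration added here, not part of the article) simulates studies in which 20 associations are examined and every null hypothesis is in fact true. It uses Python with numpy and scipy; the group size, number of simulated studies, and the use of a two sample t test are illustrative assumptions. On average about one comparison per study reaches P<0.05, and roughly 64% of studies (1 − 0.95^20) report at least one “significant” result.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_studies = 2000   # simulated studies
n_tests = 20       # associations examined per study
n = 50             # subjects per group

false_positives = np.zeros(n_studies, dtype=int)
for i in range(n_studies):
    for _ in range(n_tests):
        # Both groups are drawn from the same distribution, so every null
        # hypothesis is true and any "significant" result is a false positive.
        a = rng.normal(size=n)
        b = rng.normal(size=n)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives[i] += 1

print("Mean 'significant' results per study:", false_positives.mean())                 # about 1.0
print("Proportion of studies with >=1 false positive:", (false_positives > 0).mean())  # about 0.64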

Summary points

P values, or significance levels, measure the strength of the evidence against the null hypothesis; the smaller the P value, the stronger the evidence against the null hypothesis (a small worked example follows these summary points)

An arbitrary division of results, into “significant” or “non-significant” according to the P value, was not the intention of the …
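To illustrate the first summary point, the short sketch below (added for illustration, not from the article) computes two sided P values for three invented effect estimates sharing the same standard error, using a normal approximation in Python with scipy. The more extreme the observed data relative to a null hypothesis of no effect, the smaller the P value and the stronger the evidence against the null.

from scipy import stats

se = 0.2                                  # invented standard error
for estimate in (0.2, 0.4, 0.6):          # increasingly extreme observed effects
    z = estimate / se                     # standardised statistic under the null (true effect = 0)
    p = 2 * stats.norm.sf(abs(z))         # two sided P value from the normal distribution
    print(f"estimate = {estimate:.1f}, z = {z:.1f}, P = {p:.3f}")
# Prints P = 0.317, 0.046, and 0.003: more extreme data give smaller P values,
# that is, stronger evidence against the null hypothesis.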
