Publication bias in meta-analysis: its causes and consequences

https://doi.org/10.1016/S0895-4356(99)00161-4

Abstract

Publication bias is a widespread problem that may seriously distort attempts to estimate the effect under investigation. The literature is reviewed to determine which features of the design and execution of both single studies and meta-analyses lead to publication bias, and what role the author, journal editor, and reviewer play in selecting studies for publication. Methods of detecting, correcting for, and preventing publication bias are reviewed. The design of the meta-analysis itself, and of the studies included in it, is shown to be an important source of publication bias among several others. Various factors influence an author's decision to submit results for publication, and journal editors and reviewers are crucial in deciding which studies to publish. The various methods proposed for detecting and correcting for publication bias, though useful, all have limitations. Prevention, whether by registering every trial undertaken or by publishing all studies, remains an ideal that is hard to achieve.

Introduction

The Dictionary of Epidemiology [1] defines publication bias as “an editorial predilection for publishing particular findings, e.g., positive results, which leads to the failure of authors to submit negative findings for publication.” Rosenthal, in his “file drawer problem,” described an extreme view in which the journals are filled with the 5% of studies showing a false-positive result, while the other 95%, with nonsignificant results (P ≥ 0.05), are left to fill file drawers [2]. Awareness of publication bias dates back to 1956, when the editor of the Journal of Abnormal and Social Psychology indicated that negative studies were less likely to be published in his journal [3]. In 1959, it was found that very few negative results were reported in four psychological journals, a finding regarded as strongly suggestive of publication bias [4]. However, no attempt was made to quantify the problem until 1964 [5]. The existence of publication bias is now widely accepted. Attempts to summarize the evidence relating to a specific hypothesis, whether by narrative review or meta-analysis, can be seriously distorted by it. For example, one recent analysis estimated that 45% of an observed association could be due to publication bias [6].
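Rosenthal's file drawer argument can be made concrete with the “fail-safe N” he proposed alongside it [2], which asks how many unpublished null studies would have to sit in file drawers before the combined published evidence lost significance. The following sketch is illustrative only, using Stouffer's method with assumed z-scores rather than data from any study cited here:

```python
from statistics import NormalDist

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: the number of unseen null (z = 0) studies
    needed to push a Stouffer-combined one-tailed P value above alpha."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # 1.645 for alpha = 0.05
    # Stouffer: z_combined = sum(z) / sqrt(k + N); solve for N at z_alpha.
    n = (sum(z_scores) / z_alpha) ** 2 - len(z_scores)
    return max(0.0, n)

# Five hypothetical published studies, each only just significant (z = 1.8):
print(round(fail_safe_n([1.8] * 5)))  # about 25 hidden null studies
```

A small body of marginally significant results is thus fairly easy to overturn, although the tolerance for hidden nulls grows quadratically as published z-scores accumulate.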

This article aims to explore publication bias and related issues, and the effect it may have on attempts to review the evidence bearing on a given hypothesis. Features of the design and execution of both single studies and meta-analyses that may lead to publication bias are examined, along with the factors that may influence an author's decision to submit results for publication. The role of journal editors and reviewers in deciding which studies to publish is also considered. Methods aimed at confirming the existence of, correcting for, and preventing publication bias are reviewed. It is shown that the extent of such bias can be estimated, and even corrected for, so that authors of future reviews can not only be fully aware of the problem but also take steps to minimize it.

Publication bias arising from the design or execution of single studies

Several facets of the design or execution of a study, including sample size and the method of reporting the data, may lead to publication bias. The investigator's own beliefs and expectations may also influence the outcome. A small sample size leads to a lack of power [7], and significance may then be obtained only if chance exaggerates any true difference between the groups under study [8]. Though the obvious likely effect of inadequate sample size is failure to demonstrate statistical significance
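The interaction between inadequate power and selective publication can be illustrated by simulation. In the sketch below (assumed parameters, not taken from this paper), many small two-arm trials of a modest true effect are generated; only about one in ten reaches P < 0.05, and those that do overstate the true effect severalfold:

```python
import random, statistics

random.seed(1)
TRUE_EFFECT, N_PER_ARM, SIGMA, N_TRIALS = 0.2, 20, 1.0, 20_000
Z_CRIT = 1.96  # two-sided 5% significance, known-variance z test

significant = []
for _ in range(N_TRIALS):
    treated = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, SIGMA) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 * SIGMA**2 / N_PER_ARM) ** 0.5  # SE of the difference in means
    if abs(diff / se) > Z_CRIT:
        significant.append(diff)

print(f"power ~ {len(significant) / N_TRIALS:.2f}")           # roughly 0.10
print(f"mean 'published' effect ~ {statistics.mean(significant):.2f}")
# several times the true effect of 0.2, because only chance-exaggerated
# differences clear the significance threshold in a trial this small
```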

Publication bias arising from the researcher deciding whether or not to submit results

An early study found that dissertations and theses were three or four times more likely to be published if their results were positive than if they were negative [5]. Such findings may owe more to researchers deciding not to submit their results than to journal editors rejecting their papers [7,11,18–21], statistically significant positive studies being up to 10 times more likely to be submitted for publication [13,22]. The main reasons given for nonsubmission of studies are the negative

Publication bias arising from the tendency of journals to reject negative studies

Some editors and reviewers strongly dislike negative studies [7,8,20,25]. The British Medical Journal has stated that “negative results have never made rivetting reading”; its ideal article is one that affects clinical practice, improves prognosis, or simplifies management [19]. While some negative reports may legitimately be rejected on grounds of poor quality [3,19], even negative studies that appear better conducted than positive ones may be much less likely to be accepted for publication [25].

Sponsorship

A study's source of funding may also unduly influence the probability that its results are subsequently published. For instance, studies showing no association between exposure and disease may be published by groups with a presumed special interest in demonstrating a lack of causation, such as the companies that introduced the risk factor [13,29]. Similarly, reports submitted to governments by Scandinavian pharmaceutical companies showed a lower proportion of published than unpublished studies

Bias arising from the design and execution of reviews and meta-analyses

There are likely to be unpublished studies relevant to any given hypothesis. As published studies may systematically differ from unpublished ones [31,32], reviews or meta-analyses based only on published data may reach misleading conclusions [33]. It is widely thought, therefore, that as many studies as possible should be included, both published and unpublished [22,31,34–36].

However, there are some problems with this simple view. Firstly, it should be noted that it is often impossible to

Methods of detecting and correcting for publication bias

As publication bias may seriously distort the findings of a meta-analysis, various methods have been devised to detect its presence. Each method is described below, in some cases with examples of its use; the chief advantages and limitations of each are listed in Table 1.
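One of the best-known diagnostics in this family is the funnel plot, in which each study's effect estimate is plotted against its precision: an unbiased literature forms a symmetric funnel, while suppression of small nonsignificant studies hollows out one corner. The sketch below simulates that signature under assumed parameters (it is not the simulated funnel plot referred to in the Acknowledgements):

```python
import math, random

random.seed(2)
TRUE_EFFECT = 0.3
published = []
for _ in range(500):
    n = random.randint(10, 400)   # per-arm sample size, varying widely
    se = math.sqrt(2.0 / n)       # SE of the effect estimate (unit variance)
    effect = random.gauss(TRUE_EFFECT, se)
    # Selective publication: significant results always appear;
    # nonsignificant ones appear only 30% of the time (assumed rates).
    if abs(effect / se) > 1.96 or random.random() < 0.3:
        published.append((effect, se))

# In an unbiased funnel the mean estimate sits near 0.3 at every precision;
# after selection, the surviving small studies are the exaggerated ones.
small = [e for e, s in published if s > 0.3]   # imprecise studies
large = [e for e, s in published if s < 0.15]  # precise studies
print(f"mean effect, small studies: {sum(small)/len(small):.2f}")  # inflated
print(f"mean effect, large studies: {sum(large)/len(large):.2f}")  # near 0.3
```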

Registries

Identifying published trials through literature searches and computer databases is relatively straightforward, but information on unpublished trials is not as readily available. The use of registries has been advocated to overcome this, and registries already exist in the fields of perinatal medicine, cancer and acquired immunodeficiency syndrome treatment, and antithrombotic trials [19,33,68]. As registration usually occurs before the results are known, a complete database of all trials

Conclusions

Publication bias appears to be a widespread problem in the scientific literature, and has been demonstrated in many fields of research. Various aspects of the design and execution of both single studies and meta-analyses may increase the probability of bias of this type, and its occurrence may seriously distort any attempts to derive valid estimates by pooling data from a group of studies, skewing the outcome towards positive results. Although various methods have been proposed for determining

Acknowledgements

We thank Mrs P.J. Wassell and Mrs D.P. Morris for their assistance in the typing of this manuscript, and Mrs B.A. Forey for preparing the simulated funnel plot. Financial support was provided by Philip Morris Europe, to whom we are also grateful.

References (75)

  • D. Moher et al. Completeness of reporting trials published in languages other than English: implications for conduct and reporting of systematic reviews. Lancet (1996)
  • G. Grégoire et al. Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias? J Clin Epidemiol (1995)
  • A. Vickers et al. Do certain countries produce only positive results? A systematic review of controlled trials. Controlled Clin Trials (1998)
  • M.B. Smith. Editorial. J Abnorm Social Psychol (1956)
  • T. Sterling. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. Am Stat Assoc J (1959)
  • R.G. Smart. The importance of negative results in psychological research. Can Psychol (1964)
  • M. Angell. Negative studies. N Engl J Med (1989)
  • R.G. Newcombe. Towards a reduction in publication bias. BMJ (1987)
  • S.J. Pocock et al. Statistical problems in the reporting of clinical trials: a survey of three medical journals. N Engl J Med (1987)
  • S.J. Green et al. Effects on overviews of early stopping rules for clinical trials. Stat Med (1987)
  • P.N. Lee. Problems in interpreting epidemiological data.
  • C.B. Begg et al. Publication bias: a problem in interpreting medical data. J R Stat Soc A (1988)
  • G.H. Givens et al. Publication bias in meta-analysis: a Bayesian data-augmentation approach to account for issues exemplified in the passive smoking debate. Stat Sci (1997)
  • R.I. Engler et al. Misrepresentation and responsibility in medical research. N Engl J Med (1983)
  • R. Rosenthal et al. Psychology of the scientist: V. Three experiments in experimenter bias. Psychol Rep (1963)
  • R. Rosenthal. Experimenter outcome-orientation and the results of the psychological experiment. Psychol Bull (1964)
  • T.C. Chalmers et al. Meta-analysis of clinical trials as a scientific discipline. I: Control of bias and comparison with large co-operative trials. Stat Med (1987)
  • K. Dickersin. The existence of publication bias and risk factors for its occurrence. JAMA (1990)
  • J. Kleijnen et al. Review articles and publication bias. Arzneimittelforschung (1992)
  • Minerva. Re publication bias. BMJ (1992)
  • R.J. Light. Accumulating evidence from independent studies: what we can win and what we can lose. Stat Med (1987)
  • C.B. Begg. A measure to aid in the interpretation of published clinical trials. Stat Med (1985)
  • T.C. Chalmers et al. Minimizing the three stages of publication bias. JAMA (1990)
  • B. Charlton. Think negative: science needs its failures. New Scientist (1987)
  • M.J. Mahoney. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cog Ther Res (1977)
  • L.A. Bero et al. Sponsored symposia on environmental tobacco smoke. JAMA (1994)