Fifteen years ago many editors and academics had never heard of impact factors. Now they are obsessed with them. When I first became editor of the BMJ in 1991 I would attend the editorial boards of our dozen specialist journals—Gut and Thorax, for example—and present data on the journals' impact factors. Usually nobody had heard of impact factors. I explained what they were—and people yawned. Now editors break open bottles of champagne if their impact factor rises by a tenth of a point or burst into tears if it falls. They build their editorial strategies around increasing their impact factors. Authors, meanwhile, can quote the impact factors of the major journals and use them when deciding where to submit their papers. What is this thing called the impact factor? Why does it have such power? And is it a blessing or a curse?

The impact factor was first mentioned by its inventor, Eugene Garfield, in Science in 1955.1 He proposed that a system should be devised that, for any original scientific paper, ‘would provide a complete listing … of all the original articles that had referred to the article in question’. The law had been doing something similar since 1873. Garfield saw many uses for the citation index, but his prime aim was to ‘eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticism of earlier papers’. He began his article with a quotation: ‘The uncritical citation of disputed data … is a serious matter … Buried in scholarly journals, critical notes are increasingly likely to be overlooked with the passage of time, while the studies to which they pertain, having been reported more widely, are apt to be rediscovered.’

I find it ironic to read these words half a century after they were written because I fear that the impact factor born in this article has done little to reduce the citation of fraudulent data and may well have encouraged such citations. Several studies have shown that retracted articles continue to be cited.2,3 One recent study of 211 retracted articles published between 1996 and 2000 found that a third of their citations occurred after the articles were retracted.3 Of these 137 citations only five were negative: the vast majority cited the work affirmatively. To add to our distress, a recent article in Science has shown that many studies proved to be fraudulent are not even retracted.4

So citation analysis has not achieved Garfield's original goal—and it may even have made the problem worse. The importance of impact factors to authors and editors encourages indiscriminate citation. Authors cite themselves and each other in ‘citation cartels’ in order to boost their citation counts, and some editors require authors to cite work in their journals in order to increase the journal's impact factor.5,6

Garfield thought too that the ‘impact factor,’ which he mentions in inverted commas for the first time on the second page of his article, would be ‘particularly useful in historical research, when one is trying to evaluate the significance of a particular work and its impact on the literature and thinking of the period.’ This is where the impact factor has had its major influence, but as part of research assessment rather than as a tool of historical research.

Last September I heard Garfield, who was as sprightly as ever at 80, reflect on the history and the meaning of the impact factor. His talk was subsequently published in JAMA.7 He explained how the idea he floated in Science led in 1961 to the publication of the Science Citation Index. The index provides both an ‘author impact factor’ and a ‘journal impact factor.’ Most people who refer to impact factors mean journal impact factors, and a major problem has arisen because those assessing research have treated an article as if it carried the impact factor of the journal that published it. So an article published in the New England Journal of Medicine scores 38 and is worth more than five times as much as an article published in the BMJ (impact factor 7). In fact, how often an individual article is cited correlates only weakly with the impact factor of the journal in which it appears, because a journal's impact factor is driven by a few papers that are highly cited.8

The academics who belonged to the editorial boards I spoke to 15 years ago have become obsessed with impact factors because of their importance in the British research assessment exercise and its equivalents in other countries. The exercise rewards those who score highly and penalizes those who score poorly. Doing well in the exercise is a matter of professional life and death for academics, and so they are interested in every aspect of it—including impact factors. Ironically, the Higher Education Funding Council in Britain came to understand that it was assessing science in a fundamentally unscientific way by using the impact factor of journals as a surrogate for the impact of the articles published in them. It therefore told the panels doing the assessment not to do so—but the habit has stuck, not least because it is so much easier than forming a judgement on the significance of an individual study.

Editors of scientific journals—always desperate for better studies—have recognized the importance that academics place on impact factors, and many have become obsessed with increasing the impact factor of their journal. No other measure matters. Material that does not attract citations must be ditched, and editors must search for material and for editorial strategies that will increase the impact factor of their journals. The impact factor is calculated (to the absurdity of three decimal places) by dividing the number of citations a journal receives in a given year to the articles it published in the previous 2 years by the number of citable articles it published in those 2 years. One of the many substantial snags with the impact factor is that some journals—particularly general journals—publish a wide range of material: news, obituaries, book and television reviews, and much more. It is not clear what should be included in the denominator, and many editors have discovered that the best way to increase the impact factor of your journal is to persuade the Institute for Scientific Information, which compiles the impact factors, to exclude as much as possible from the denominator. By doing this editors can more than double the impact factors of their journals.
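To make the arithmetic concrete, the calculation for a given year Y is, in essence,

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable articles published in years } Y-1 \text{ and } Y-2}.
\]

The figures that follow are purely hypothetical, chosen only to illustrate the denominator game described above. A journal whose 400 citable articles from the previous 2 years attract 2800 citations this year has an impact factor of 2800/400 = 7.000. If 200 of those items are reclassified as non-citable (news, obituaries, reviews) while the citations to them still count in the numerator, the figure jumps to 2800/200 = 14.000: the doubling referred to above.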

Malcolm Chiswick, at one time editor of Archives of Disease in Childhood, described how an obsession with impact factors can lead to what he termed an ‘impacted journal.’ Everything readable and entertaining is cut in favour of material that will be cited. This means that a journal is designed for citing rather than reading and for authors (who can cite articles) rather than readers (who cannot). In the case of medical journals this means that the needs of researchers are put before the needs of ordinary doctors, even though for many general medical journals ordinary doctors far outnumber researchers as readers. A journal's impact factor may rise even as its readership declines.

So, has the impact factor conceived by Garfield all those years ago been a force for good or harm? Perhaps this is a meaningless question. Perhaps like many technologies—nuclear energy, the telephone, and the internet, for example—it has the potential for both good and harm. It is not the technology itself that matters but how we use it. Accepting that, I still believe that we might have been better off if the impact factor had not been invented. Other, more intelligent and meaningful ways would have had to be used to assess research and journals. The story could, however, have been different if citation analyses had been used in the way Garfield imagined in that Science article—to avoid the citing of unreliable studies and to deepen historical understanding. Things went wrong, I believe, when the impact factor became a number. People, including scientists, credit numbers with an importance that they deny to words.

Garfield presciently ended his 1955 article with these two sentences: ‘The new bibliographic tool, like others that already exist, is just a starting point in literature research. It will help in many ways, but one should not expect it to solve all our problems’. Mistakenly, we asked the number to do too much.

References

1. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science 1955;122:108–11. (Reprinted Int J Epidemiol 2006;35:1123–27.)

2. Pfeifer MP, Snodgrass GL. The continued use of retracted, invalid scientific literature. JAMA 1990;263:1420–23.

3. Gabehart ME. An analysis of citations to retracted articles in the scientific literature. Available at: http://hdl.handle.net/1901/199 (accessed September 3, 2006).

4. Couzin J, Unger K. Cleaning up the paper trail. Science 2006;312:38–43.

5. Franck G. Scientific communication—a vanity fair? Science 1999;286:53–55.

6. Smith R. Journal accused of manipulating impact factor. BMJ 1997;314:461.

7. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90–93.

8. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:497.