Bibliometric rankings of journals based on Impact Factors: An axiomatic approach☆
Introduction
The traditional way to evaluate research is to rely on peer judgement. Because this evaluation technique is costly and may suffer from a number of problems (Campanario, 1998a, Campanario, 1998b), the bibliometric literature has developed alternative tools, mainly based on various ways of counting citations (Garfield, 1955, Garfield, 1972, Garfield, 1979). As noted by van Raan (2005, p. 2), this “unavoidably introduces a bibliometrically limited view of a complex reality”.
Among the numerous bibliometric indices that have been proposed in the literature, Impact Factors (IFs) of journals stand out as being among the oldest and the most widely used when it comes to evaluating journals (recent references using IFs for ranking journals include Bar-Ilan, 2010, Franceschet, 2010). Glänzel and Moed (2002, p. 172) describe IFs as “a fundamental citation-based measure for significance and performance of scientific journals”. We refer the reader to Glänzel and Moed (2002) for a thorough overview of IFs. Archambault and Larivière (2009) and Garfield (2006) detail the history and origins of IFs. Roughly speaking, the IF of a journal gives the mean number of citations received by papers published in this journal.
IFs are not the only bibliometric indices that have been introduced in the literature. Recent years have seen a flourishing of such indices, including, e.g., the h-index (Hirsch, 2005) and the g-index (Egghe, 2006). These new indices were studied from an axiomatic standpoint almost immediately after their introduction (Marchant, 2009a, Marchant, 2009b, Quesada, 2009, Rousseau, 2008b, Woeginger, 2008a, Woeginger, 2008b, Woeginger, 2008c). Curiously, this axiomatic literature has not paid much attention to IFs, although they are much older and more widely used.
An exception is Bouyssou and Marchant (2010), who studied the problem of consistently ranking authors and journals. The study of consistent rankings of authors and journals requires a rather rich framework that includes three different sets (authors, papers, and journals) and three different binary relations (indicating authorship, citations, and publication media). Within this framework, we presented axioms implying that journals are ranked according to IFs and authors according to the number of citations received by the papers they have signed (with a possible correction for co-authors). This analysis uses many axioms because of the richness of the framework. Furthermore, because axioms related to journals and axioms related to authors are linked by a consistency condition, they interact. It is therefore not easy to derive from the results in Bouyssou and Marchant (2010) the conditions needed to characterize the ranking of journals using IFs, independently of what happens with authors. This is the purpose of the present paper. We will use a simple framework that only involves journals and present a set of conditions implying that journals are ranked using IFs.
Our present treatment of the ranking of journals based on IFs rests on a fairly simple intuition. In order to compute the IF of a journal, it is only necessary to know how many papers published in this journal have received x citations, for all integers x ≥ 0, i.e., the distribution of citations for the journal. Comparing distributions of citations bears a striking resemblance to the problem of comparing probability distributions on consequences, a classical problem in decision theory. Exploiting this similarity will allow us to provide a simple axiomatic foundation to the ranking of journals based on IFs. This will also lead us to suggest alternative rankings that use generalizations of IFs. On the technical side, while Marchant (2009b) emphasizes the power of results on “extensive measurement” (Krantz, Luce, Suppes, & Tversky, 1971) to deal with the ranking of authors, we will show that results on “expected utility” (Fishburn, 1970) are highly relevant to the analysis of the ranking of journals using IFs. We hope that this will shed new light on the discussion concerning the pros and cons of this bibliometric ranking of journals by making explicit the conditions underlying it.
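As a concrete illustration of this intuition, the IF of a journal can be computed directly from its distribution of citations, i.e., from the number of papers having received exactly x citations. The following is a minimal sketch (the function name and the dictionary-based data layout are ours, not the paper's):

```python
from fractions import Fraction

def impact_factor(distribution):
    """Mean number of citations per paper, computed from a citation
    distribution {x: number of papers having received x citations}."""
    n_papers = sum(distribution.values())                     # N(a)
    n_citations = sum(x * n for x, n in distribution.items())
    return Fraction(n_citations, n_papers)

# A journal with 3 uncited papers, 5 papers cited twice, 2 papers cited ten times:
print(impact_factor({0: 3, 2: 5, 10: 2}))  # 30 citations / 10 papers = 3
```

Note that only the distribution matters: any two journals with the same distribution of citations receive the same IF, regardless of which specific papers are cited.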
This paper is organized as follows. Section 2 introduces our framework and notation. Section 3 presents the main conditions used in the paper. Section 4 characterizes a class of bibliometric rankings of journals that includes the one based on IFs as a particular case. Section 5 specializes this analysis to characterize the ranking based on IFs. Section 6 concludes. Appendix A illustrates some of our findings using citation data for a small sample of economic journals.
Section snippets
Notation and definitions
A published paper may receive citations from other papers. We take here these citations as given and we do not discuss the various ways in which such citations can or should be computed (e.g., what is the set of journals that should be included in the database, what is the relevant time window to collect citations, how should we deal with self-citations, etc.). The only aspect of papers that is taken into account in our analysis is the number of citations (possibly zero) that they receive.
Axioms
Let ≿ be a binary relation on the set of all journals.1 We interpret ≿ as an “at least as good as” preference relation between journals (throughout the paper, we use the term “preference” to indicate what some readers may prefer to call “impact”). The relation ≻ denotes the asymmetric part of ≿, i.e., the binary relation on the set of all journals such that a ≻ b if [a ≿ b and Not[b ≿ a]]. We interpret ≻ as a “strict preference” relation between
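The asymmetric part ≻ can be derived mechanically from ≿. A minimal sketch, representing ≿ as a set of ordered pairs (our own representation, not the paper's):

```python
def strict_part(weak):
    """Asymmetric part of a weak preference relation:
    (a, b) is in the result iff a is weakly preferred to b but not conversely."""
    return {(a, b) for (a, b) in weak if (b, a) not in weak}

# a is at least as good as b, but not conversely: a is strictly preferred to b.
# b and c are each at least as good as the other: a tie, so no strict preference.
weak = {("a", "b"), ("b", "c"), ("c", "b")}
print(strict_part(weak))  # {('a', 'b')}
```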
Rankings using Generalized Impact Factors
In this section, we consider a family of bibliometric rankings that are very close to ≿IF except that the value of a paper having received x citations is computed using an increasing function u. This leads to what we call Generalized Impact Factors (GIFs).
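In code, a GIF is the mean of u(x) over papers rather than the mean of x itself. A sketch under these assumptions (the concave choice of u below is an arbitrary illustration of ours, not one advocated in the paper):

```python
import math

def gif(distribution, u):
    """Generalized Impact Factor: mean value of u(x) over the papers of a
    journal, for a citation distribution {x: number of papers cited x times}."""
    n_papers = sum(distribution.values())
    return sum(u(x) * n for x, n in distribution.items()) / n_papers

dist = {0: 3, 2: 5, 10: 2}
print(gif(dist, u=lambda x: x))                # with u the identity, the ordinary IF: 3.0
print(gif(dist, u=lambda x: math.log(1 + x)))  # a concave u discounts heavily cited papers
```

With the identity function the GIF coincides with the IF; an increasing but non-affine u changes how extra citations to already highly cited papers are valued.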
Ranking using Impact Factors
This section investigates what must be added to the conditions used in Theorem 1 to guarantee that the function u becomes an affine function, implying that model (2) reduces to model (1).
Consider a journal a. Suppose that a paper published in a receives an additional citation. When journals are compared using IFs, it does not matter which of the papers published in a receives the additional citation. An additional citation to any paper published in a will increase IF(a) by 1/N(a). This is
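This independence property is easy to check numerically: whichever paper receives the extra citation, the IF rises by exactly 1/N(a). A small sketch (the data are invented for illustration):

```python
def mean_citations(citations):
    """IF of a journal, given the list of per-paper citation counts."""
    return sum(citations) / len(citations)

papers = [0, 2, 2, 7]            # N(a) = 4 papers
base = mean_citations(papers)
for i in range(len(papers)):     # give the extra citation to each paper in turn
    bumped = papers.copy()
    bumped[i] += 1
    # the increase is 1/N(a) = 0.25, regardless of which paper is cited
    assert abs(mean_citations(bumped) - base - 1 / len(papers)) < 1e-12
print("IF increases by", 1 / len(papers), "in every case")
```

Under a non-affine u this is no longer true: the increase then depends on how many citations the paper already had, which is exactly what the extra condition of this section rules out.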
Discussion
This paper has analyzed the ranking of journals based on IFs using an axiomatic approach. We have given necessary and sufficient conditions (Theorem 2) for a binary relation comparing journals to coincide with the relation induced by IFs. We have also analyzed a general class of indices, called Generalized Impact Factors (GIFs), in which distributions of citations having equal means are not necessarily considered as indifferent (Theorem 1). This family of indices uses a function u that allows
References (59)
Informetrics at the beginning of the 21st century—A review. Journal of Informetrics (2008).
Ranking of information and library science journals by JIF and by h-type indices. Journal of Informetrics (2010).
Consistent bibliometric rankings of authors and journals. Journal of Informetrics (2010).
Journal influence factors. Journal of Informetrics (2010).
Monotonicity and the Hirsch index. Journal of Informetrics (2009).
Increasing risk: I. A definition. Journal of Economic Theory (1970).
Woeginger’s axiomatisation of the h-index and its relation to the g-index, the h(2)-index and the r2-index. Journal of Informetrics (2008).
Utility theory based on rational probabilities. Journal of Mathematical Economics (1980).
Citation to scientific articles: Its distribution and dependence on the article features. Journal of Informetrics (2010).
An axiomatic analysis of Egghe’s g-index. Journal of Informetrics (2008).
An axiomatic characterization of the Hirsch-index. Mathematical Social Sciences.
A symmetry axiom for scientific impact indices. Journal of Informetrics.
Citing-side normalization of journal impact: A robust variant of the Audience factor. Journal of Informetrics.
Citation statistics: A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Statistical Science.
History of the journal impact factor: Contingencies and consequences. Scientometrics.
Aspects of the theory of risk-bearing. Academic Bookstore, Yrjö Jahnsson Foundation, Helsinki; reprinted under the title “Essays in the theory of risk-bearing”.
Peer review for journals as it stands today, part 1. Science Communication.
Peer review for journals as it stands today, part 2. Science Communication.
Dominance relations and universities ranking. Cahier du GREThA 2009-02, GREThA.
Power laws in the information production process: Lotkaian informetrics.
Theory and practice of the g-index. Scientometrics.
Utility theory for decision-making.
Nonlinear preference and utility theory.
Nontransitive preferences in decision theory. Journal of Risk and Uncertainty.
Theoretical foundations of stochastic dominance.
Citation indexes for science: A new dimension in documentation through the association of ideas. Science.
Citation analysis as a tool in journal evaluation. Science.
Is citation analysis a legitimate evaluation tool? Scientometrics.
The history and meaning of the journal impact factor. Journal of the American Medical Association.
☆ Authors are listed alphabetically. During the preparation of this paper, Thierry Marchant benefited from a visiting position at the Université Paris Dauphine, Paris, and the Indian Statistical Institute, New Delhi. The work of Denis Bouyssou was partially supported by a grant from the Agence Nationale de la Recherche (Project ComSoc, ANR-09-BLAN-0305). This support is gratefully acknowledged. We would like to thank Ludo Waltman and an anonymous referee for their very useful comments on earlier drafts of this paper.