Journal of Informetrics

Volume 5, Issue 1, January 2011, Pages 37-47

Towards a new crown indicator: Some theoretical considerations

https://doi.org/10.1016/j.joi.2010.08.001

Abstract

The crown indicator is a well-known bibliometric indicator of research performance developed by our institute. The indicator aims to normalize citation counts for differences among fields. We critically examine the theoretical basis of the normalization mechanism applied in the crown indicator. We also make a comparison with an alternative normalization mechanism. The alternative mechanism turns out to have more satisfactory properties than the mechanism applied in the crown indicator. In particular, the alternative mechanism has a so-called consistency property. The mechanism applied in the crown indicator lacks this important property. As a consequence of our findings, we are currently moving towards a new crown indicator, which relies on the alternative normalization mechanism.

Introduction

It is well known that in some scientific fields the average number of citations per publication (within a certain time period) is much higher than in other scientific fields. This is due to differences among fields in the average number of cited references per publication, the average age of cited references, and the degree to which references from other fields are cited. In addition, bibliographic databases such as Web of Science and Scopus cover some fields more extensively than others (e.g., Moed, 2005). Clearly, all other things being equal, one will find a higher average number of citations per publication in fields with a high database coverage than in fields with a low database coverage.

In citation-based research performance evaluations, it is crucial that one carefully controls for the above-mentioned differences among fields. This is especially the case for performance evaluations at higher levels of aggregation, such as at the level of countries, universities, or multi-disciplinary research groups. In performance evaluation studies, our institute, the Centre for Science and Technology Studies (CWTS) of Leiden University, uses a standard set of bibliometric indicators (Van Raan, 2005). Our best-known indicator, which we usually refer to as the crown indicator, relies on a normalization mechanism that aims to correct for the above-mentioned differences among fields. An indicator similar to the crown indicator is used by the Centre for R&D Monitoring (ECOOM) in Leuven, Belgium. ECOOM calls its indicator the normalized mean citation rate (e.g., Glänzel, Thijs, Schubert, & Debackere, 2009).

The normalization mechanism of the crown indicator basically works as follows. Given a set of publications, we count for each publication the number of citations it has received. We also determine for each publication its expected number of citations. The expected number of citations of a publication equals the average number of citations of all publications of the same document type (i.e., article, letter, or review) published in the same field and in the same year. To obtain the crown indicator, we divide the sum of the actual number of citations of all publications by the sum of the expected number of citations of all publications.
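The ratio-of-sums mechanism described above can be sketched as follows. This is a minimal illustration; the publication data are invented for the purpose of the example, not taken from any real study:

```python
def cpp_fcsm(publications):
    """Crown-indicator normalization: sum of actual citation counts
    divided by sum of expected citation counts.

    Each publication is a pair (citations, expected_citations), where
    the expected value is the average citation count of all publications
    of the same document type, field, and publication year."""
    total_actual = sum(c for c, e in publications)
    total_expected = sum(e for c, e in publications)
    return total_actual / total_expected

# Hypothetical publication set: (actual citations, expected citations)
pubs = [(4, 2), (20, 20)]
print(cpp_fcsm(pubs))  # 24 / 22 ≈ 1.09
```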

The normalization mechanism of the crown indicator has been criticized by Lundberg (2007) and by Opthof and Leydesdorff (2010). These authors have argued in favor of an alternative mechanism. According to the alternative mechanism, one first calculates for each publication the ratio of its actual number of citations to its expected number of citations, and one then takes the average of the ratios that one has obtained. Lundberg refers to an indicator that uses this mechanism as the item-oriented field-normalized citation score average. This indicator is used by Karolinska Institutet in Sweden (Rehn & Kronman, 2008). Similar indicators are used by Science-Metrix in the US and Canada (e.g., Campbell et al., 2008, p. 12), by the SCImago research group in Spain (SCImago Research Group, 2009), and by Wageningen University in the Netherlands (Van Veller, Gerritsma, Van der Togt, Leon, & Van Zeist, 2009). Sandström also used a similar indicator in various bibliometric studies (e.g., Sandström, 2009, pp. 33–34).
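The alternative mechanism, an average of per-publication ratios, differs from the ratio-of-sums mechanism whenever expected values vary across publications. A minimal sketch with invented numbers:

```python
def mean_of_ratios(publications):
    """Alternative normalization (Lundberg's item-oriented score):
    average, over publications, of actual / expected citations."""
    return sum(c / e for c, e in publications) / len(publications)

def ratio_of_sums(publications):
    """Crown-indicator normalization: total actual / total expected."""
    return sum(c for c, e in publications) / sum(e for c, e in publications)

# The two mechanisms disagree on the same data because the second
# publication has a much larger expected value and therefore dominates
# the sums in the ratio-of-sums calculation.
pubs = [(4, 2), (20, 20)]    # per-publication ratios: 2.0 and 1.0
print(mean_of_ratios(pubs))  # (2.0 + 1.0) / 2 = 1.5
print(ratio_of_sums(pubs))   # 24 / 22 ≈ 1.09
```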

In this paper, we present a theoretical comparison between the normalization mechanism of the crown indicator and the alternative normalization mechanism discussed by Lundberg (2007) and others. We first consider two fictitious examples that provide some insight into the difference between the mechanisms. We then study the consistency (Waltman and Van Eck, 2009a, Waltman and Van Eck, 2009b) of the mechanisms. We also pay some attention to the way in which overlapping fields should be handled. The main finding of the paper is that the alternative normalization mechanism has a more solid theoretical basis than the normalization mechanism currently applied in the crown indicator. As a consequence of this finding, CWTS is currently moving towards a new crown indicator, which relies on the alternative mechanism. For an extensive empirical comparison between the two normalization mechanisms, we refer to Waltman et al. (2010).

Definitions of indicators

In this section, we provide formal mathematical definitions of the CPP/FCSm indicator and of the MNCS indicator. The CPP/FCSm indicator has been used as the crown indicator of CWTS for more than a decade. The MNCS indicator, where MNCS is an acronym for mean normalized citation score, is the new crown indicator that CWTS is planning to adopt. The two indicators differ from each other in the normalization mechanism they use. Throughout this paper, we focus on the issue of normalization for differences among fields.

Example 1

The following fictitious example provides some insight into the difference between the CPP/FCSm indicator and the MNCS indicator. Suppose we want to compare the performance of two research groups, research group A and research group B. Both research groups are active in the same field. This field consists of two subfields, subfield X and subfield Y. Research groups A and B have the same number of publications, and they both have half of their publications in subfield X and half of their publications in subfield Y.
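The example's numbers are elided in this snippet. A hypothetical instance in its spirit (all citation counts below are invented for illustration) shows how the two indicators can rank the same two groups differently when the subfields have very different citation densities:

```python
def cpp_fcsm(pubs):
    """Ratio of sums: total actual / total expected citations."""
    return sum(c for c, e in pubs) / sum(e for c, e in pubs)

def mncs(pubs):
    """Mean of ratios: average of per-publication actual / expected."""
    return sum(c / e for c, e in pubs) / len(pubs)

# Subfield X has a low expected citation rate (2), subfield Y a high
# one (20). Each group has one publication in each subfield.
group_a = [(4, 2), (20, 20)]   # ratios 2.0 and 1.0
group_b = [(2, 2), (30, 20)]   # ratios 1.0 and 1.5

print(cpp_fcsm(group_a), cpp_fcsm(group_b))  # ≈1.09 vs ≈1.45: B ranked higher
print(mncs(group_a), mncs(group_b))          # 1.50 vs 1.25: A ranked higher
```

The ratio-of-sums mechanism effectively weights subfield Y more heavily because its expected values are larger, whereas the mean-of-ratios mechanism gives each publication equal weight.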

Example 2

We now turn to another fictitious example that demonstrates the difference between the CPP/FCSm indicator and the MNCS indicator. The example also illustrates the possible policy-relevant consequences of the difference between the indicators. Suppose the faculty of natural sciences of some university finds itself in the following situation. The faculty is doing research in two broad fields, chemistry and physics. (For simplicity, we do not break down these fields into subfields.)
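The details of this example are elided in the snippet. To illustrate the kind of policy-relevant divergence at stake, consider the following invented scenario (not the paper's own numbers): a faculty weighs discontinuing a chemistry group whose publications are cited well above their field's average. The two indicators give opposite advice:

```python
def cpp_fcsm(pubs):
    """Ratio of sums: total actual / total expected citations."""
    return sum(c for c, e in pubs) / sum(e for c, e in pubs)

def mncs(pubs):
    """Mean of ratios: average of per-publication actual / expected."""
    return sum(c / e for c, e in pubs) / len(pubs)

physics = [(40, 10)]           # ratio 4.0
chemistry = [(1, 1), (3, 1)]   # ratios 1.0 and 3.0
faculty = physics + chemistry

# The chemistry group with ratio 3.0 (three times its field average)
# is discontinued.
without_group = physics + [(1, 1)]

print(cpp_fcsm(faculty), cpp_fcsm(without_group))  # ≈3.67 → ≈3.73: rises
print(mncs(faculty), mncs(without_group))          # ≈2.67 → 2.50: falls
```

Under CPP/FCSm, dropping this clearly above-average group improves the faculty's score; under MNCS it lowers the score, as one would intuitively expect.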

Consistency of indicators

In this section, we study the consistency of our indicators of interest. Consistency is a mathematical property that a bibliometric indicator may or may not have. In earlier research (Waltman and Van Eck, 2009a, Waltman and Van Eck, 2009b), it was pointed out that the well-known h-index (Hirsch, 2005) does not have the property of consistency.

We first introduce some mathematical notation. For our purpose, a publication can be represented by an ordered pair (c,e), where c and e denote its actual number of citations and its expected number of citations, respectively.
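Consistency, in the sense of Waltman and Van Eck (2009a, 2009b), requires that if one group of publications outperforms another group of the same size, adding an identical publication to both groups does not reverse the ranking. A small numeric check (citation counts invented for illustration) shows that the CPP/FCSm mechanism can violate this property, while the mean-of-ratios mechanism preserves it:

```python
def cpp_fcsm(pubs):
    """Ratio of sums: total actual / total expected citations."""
    return sum(c for c, e in pubs) / sum(e for c, e in pubs)

def mncs(pubs):
    """Mean of ratios: average of per-publication actual / expected."""
    return sum(c / e for c, e in pubs) / len(pubs)

s1 = [(2, 1)]     # one publication, ratio 2.0
s2 = [(30, 20)]   # one publication, ratio 1.5
extra = (0, 10)   # the same uncited publication is added to both groups

print(cpp_fcsm(s1) > cpp_fcsm(s2))                      # True
print(cpp_fcsm(s1 + [extra]) > cpp_fcsm(s2 + [extra]))  # False: ranking reversed
print(mncs(s1) > mncs(s2))                              # True
print(mncs(s1 + [extra]) > mncs(s2 + [extra]))          # True: ranking preserved
```

The mean of ratios is consistent because adding the same publication shifts both group averages in the same way; the ratio of sums is not, because the added publication's expected value interacts with each group's citation totals differently.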

How to handle overlapping fields?

In the previous sections, we have shown that the MNCS indicator has attractive theoretical properties. In this section, we therefore focus exclusively on the MNCS indicator. We study how the indicator should be calculated in the case of overlapping fields.

A nice property that we would like the MNCS indicator to have is that the indicator has a value of one when calculated for the set of all publications published in all fields. If there are no publications that belong to more than one field, the MNCS indicator indeed has this property.
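The snippet elides the paper's own treatment of overlapping fields. One way to obtain the desired property, sketched here purely as an assumption, is fractional counting: a publication belonging to k fields contributes weight 1/k to each field, both when computing the field means and when computing its own normalized score. Under this convention the MNCS over the full database equals one:

```python
from collections import defaultdict

def field_means(pubs):
    """Fractionally counted mean citations per field.

    Each publication is (citations, [fields]); a publication in k
    fields contributes weight 1/k to each of those fields."""
    cites = defaultdict(float)
    weight = defaultdict(float)
    for c, fields in pubs:
        w = 1 / len(fields)
        for f in fields:
            cites[f] += w * c
            weight[f] += w
    return {f: cites[f] / weight[f] for f in cites}

def mncs_overlapping(pubs):
    """MNCS with fractional attribution of publications to fields."""
    means = field_means(pubs)
    score = 0.0
    for c, fields in pubs:
        w = 1 / len(fields)
        score += sum(w * c / means[f] for f in fields)
    return score / len(pubs)

# Hypothetical database: the third publication belongs to both fields.
db = [(10, ["X"]), (2, ["X"]), (8, ["X", "Y"]), (20, ["Y"])]
print(mncs_overlapping(db))  # 1.0 for the full database, as desired
```

The property holds because each field's fractional normalized scores sum to that field's fractional publication count, so the grand total equals the number of publications.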

Discussion and conclusion

We have presented a theoretical comparison between two normalization mechanisms for bibliometric indicators of research performance (for an empirical comparison, see Waltman et al., 2010). One normalization mechanism is implemented in the CPP/FCSm indicator, also referred to at CWTS as the crown indicator. The other normalization mechanism is implemented in what we call the MNCS indicator. The examples that we have given show that the CPP/FCSm indicator sometimes yields counterintuitive results.

Acknowledgment

We would like to thank Wolfgang Glänzel for his comments on earlier drafts of this paper.

References (37)

  • Bouyssou, D., & Marchant, T. (in press). Bibliometric rankings of journals based on impact factors: An axiomatic...
  • Braun, T., et al. (1990). United Germany: The new scientific superpower? Scientometrics.
  • Campbell, D., et al. (2008). Benchmarking of Canadian Genomics—1996-2007.
  • CWTS (n.d.). The Leiden ranking, 2008. Retrieved August 16, 2010, from...
  • De Bruin, R.E., et al. (1993). A study of research evaluation and planning: The University of Ghent. Research Evaluation.
  • Egghe, L., et al. (1996). Averaging and globalising quotients of informetric and scientometric data. Journal of Information Science.
  • Egghe, L., et al. (1996). Average and global impact of a set of journals. Scientometrics.
  • Glänzel, W., et al. (2009). Subfield-specific normalized relative indicators and a new generation of relational charts: Methodological foundations illustrated on the assessment of institutional research performance. Scientometrics.