Editorial

Judging a Journal by the Impact Factor: Is It Appropriate and Fair for Assessment Journals?

Published Online:https://doi.org/10.1027/1015-5759/a000031

Nowadays the impact factor is regarded as an important indicator of the quality of a scientific journal. Although its utility is not uncontested (Stegmann, 1997), there seems to be a consensus that it is “a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed to be more important than those with lower ones” (http://en.wikipedia.org/wiki/Impact_factor). Considered from a broad perspective, the impact factor is merely a statistic: the average number of citations received in a given year by the articles a journal published during the two preceding years.
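To make the statistic concrete, the two-year impact factor for a year Y is the number of citations received in Y to articles published in Y−1 and Y−2, divided by the number of citable items published in Y−1 and Y−2. The figures below are purely hypothetical, not EJPA's actual counts:

```python
def two_year_impact_factor(citations_received, citable_items):
    """Two-year impact factor for year Y: citations received in Y to articles
    published in Y-1 and Y-2, divided by the citable items from Y-1 and Y-2."""
    return citations_received / citable_items

# Hypothetical example: 120 citations in 2009 to articles published in
# 2007-2008, out of 80 citable items published in those two years.
print(two_year_impact_factor(120, 80))  # 1.5
```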

In a way the impact factor is a very reasonable statistic, since a scientific journal should not be self-serving but be a means of communication within the scientific community. An article published in a scientific journal should provide new results concerning a relevant research problem and thus contribute to the scientific debate that focuses on this research problem. Alternatively, an article may present methodological or technical innovations that open up the opportunity to arrive at more appropriate results for the research problem. In either case other researchers should become aware of the article and see the necessity to consider it in their own research work. This mechanism is suggested by the idea of science as discourse of researchers (Bereiter, 1994), and it is an important component of scientific processes according to the constructivist approach in science (Westmeyer, 1996).

In recent years the impact factor of European Journal of Psychological Assessment (EJPA) has been greater than 1.0 (2007 = 1.095; 2008 = 1.262; 2009 = 1.561), following a steady rise over a period of years. Furthermore, comparison of the impact factor of this journal with the impact factors of other assessment journals indicates that it is at a very reasonable level (for a detailed comparison, see an article in this issue: Alonso-Arbiol & Van de Vijver, 2010).

The positive development of European Journal of Psychological Assessment’s impact factor gives rise to the expectation that it may continue in this direction in the future. However, is this expectation reasonable? Is there no upper limit? Can the European Journal of Psychological Assessment in the long run compete with high-prestige journals like Nature and Science?

We can take various approaches in the search for answers to these questions. One is to classify journals according to the type of scientific “news” they publish, since each type addresses a different audience, and the members of these audiences differ in how likely they are to publish new articles that cite the original work. I’d like to distinguish three types of scientific “news” here:

  • Substantive news (concerning a research problem);
  • Methodological news (concerning research designs and statistical methods);
  • News concerning assessment tools (all the means used to collect empirical data, ranging from simple rating scales to tomographs).

Substantive news is addressed to researchers who are currently concerned with the corresponding research problem. Methodological news is important for researchers concerned with the same methodological problem and for applied researchers who may profit from useful innovations. News concerning an assessment tool is of interest to researchers planning a new study that might profit from a new or revised tool, or from more up-to-date information on an existing one.

It is quite obvious that an important difference between the three types of scientific “news” is the timing of their subsequent citations. Substantive news can be cited almost immediately and thus have an almost immediate effect on the impact factor. News concerning assessment tools, by contrast, may be subject to a considerable time lag before other researchers start writing about it and citing it. This lag is crucial, since the impact factor is computed on the basis of citations during the 2 years following the publication of an article.

A publication time-lag model can be used to consider the implications: such a model subdivides the time period between the decision to conduct a specific research project and the appearance of results as journal articles into five phases:

  1. Conceptualization and the generation of a research proposal
  2. Preparing the facilities for data collection
  3. Data collection
  4. Data analysis and manuscript writing
  5. Reviewing and awaiting publication after acceptance

In large empirical psychology research projects, phases 1–4 may well take 2 or 3 years, and phase 5 may last 1.5–2 years. According to this model, only scientific news that is incorporated into the research project during phase 4 (data analysis and manuscript writing) has a good chance of becoming a citation relevant for the crucial 2-year period. By contrast, if the news is incorporated into the research project during phase 2 (preparing data collection) and finds its way into the published research results, the corresponding citation will miss the 2-year period that is crucial for the impact factor. So only substantive news and, to a limited degree, methodological news (since this may prove useful during data analysis and the evaluation of results) can be cited within the relevant 2-year period and thus have an effect on the impact factor. By contrast, news concerning assessment tools will not normally be cited within this crucial period, since it would normally have to be available during either the first or second phase of a research project in order to be considered and thus be referenced.
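The arithmetic behind this argument can be sketched numerically. The phase durations below are purely illustrative values I have chosen to match the 2–3 year and 1.5–2 year estimates above (the dictionary and function names are likewise my own), but they show why news adopted while preparing data collection misses the 2-year citation window, whereas news adopted during analysis and writing can just make it:

```python
# Illustrative (hypothetical) phase durations in years for one research project.
phases = {
    "conceptualization": 1.0,
    "preparing_data_collection": 0.5,
    "data_collection": 1.0,
    "analysis_and_writing": 0.5,
    "review_and_publication": 1.5,
}

def citation_delay(adopted_in_phase):
    """Years between the cited article's appearance (assumed at the start of
    the phase in which it is picked up) and the publication of the citing
    article, i.e., the remaining duration of the project from that phase on."""
    names = list(phases)
    start = names.index(adopted_in_phase)
    return sum(phases[p] for p in names[start:])

# A tool adopted in phase 2 (preparing data collection):
print(citation_delay("preparing_data_collection"))  # 3.5 -> misses the 2-year window
# News incorporated in phase 4 (analysis and writing):
print(citation_delay("analysis_and_writing"))       # 2.0 -> just inside the window
```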

It is this publication time-lag model, in combination with the different types of scientific news, that is most important in evaluating the prospect for impact factor changes in the European Journal of Psychological Assessment and other assessment journals that give preference to the publication of empirical studies (De Bruyn, 2008). The majority of articles published in assessment journals report news concerning assessment tools. And timely (in the sense of the impact factor) citations of articles concerning assessment devices can basically only be expected from the authors themselves. Only the authors know about their new, adapted, or revised assessment tools and can use them (and thus cite them within the 2-year period) in other research projects. The rest of the scientific community will only be able to cite articles about assessment tools after the relevant 2-year period. By contrast, timely citations (in the sense of the impact factor) are much more likely for articles that report substantive or methodological news.

Consequently, an assessment journal like the European Journal of Psychological Assessment is at a disadvantage when it comes to achieving a high impact factor, because articles that can be classified as substantive news are relatively rare. Screening the contents of the last 2 years, for instance, yields a few articles that provide substantive news in addition to news about technical matters. These articles concerned thinking styles (Kuhn & Holling, 2009; Witteman, Van den Bercken, Claes, & Godoy, 2009), working memory (Alloway, 2009), the elaboration of clinical and developmental concepts such as alcohol and substance misuse (Zaldívar, Molina, López Rios, & García Montes, 2009), attachment (Dewitte, De Houwer, & Buysse, 2008), and fear of fatness (Ambwani, Warren, Gleaves, Cepeda-Benito, & Fernandez, 2008).

Methodological papers have a better chance of getting into an assessment journal than do substantive ones, since they clearly supplement papers concerning assessment tools. It is therefore no surprise that the issues of European Journal of Psychological Assessment of the last two years include articles that can be categorized as dealing with methodology, such as presentations of special methods for data analysis on the basis of applied examples. Examples include mixture-distribution Rasch analysis (Meiser & Machunsky, 2008), unfolding and dominance models (Weekers & Meijer, 2008), and the application of IRT models to data showing distortion due to different item wordings (Rauch, Schweizer, & Moosbrugger, 2008). Moreover, we have also published research reports concerning general assessment methods, such as confidence judgments (Stankov, Lee, & Paek, 2009), interviews (Lievens & Peeters, 2008), and attention assessment in general (Krumm, Schmidt-Atzert, & Eschert, 2008).

It is sometimes rumored that articles published in special issues receive a large number of citations. One can reasonably speculate whether this is because a special issue stimulates the curiosity of the audience to a high degree, or because the authors are more aware of the other authors’ articles and consider them in their future work. During the last 2 years, only one special issue was published in European Journal of Psychological Assessment (Dewitte, De Houwer, & Buysse, 2008; Gawronski, Deutsch, LeBel, & Peters, 2008; Gschwendner, Hofmann, & Schmitt, 2008; Nosek & Hansen, 2008; Richetin & Perugini, 2008; Rudolph, Schröder-Abe, Schütz, Gregg, & Sedikides, 2008; Schmuckle, Back, & Egloff, 2008; Schnabel, Asendorpf, & Greenwald, 2008; Teige-Mocigemba, Klauer, & Rothermund, 2008). However, in this case it is not clear whether it will lead to a noticeable increase in the impact factor, since the issue basically includes news concerning assessment tools.

Since the majority of articles (85%) published in European Journal of Psychological Assessment concern assessment tools, the likelihood of their being referenced during the crucial period is relatively low – and so a further increase in the impact factor is questionable. We can also conclude that the 2-year version of the impact factor is not really appropriate for an assessment journal like European Journal of Psychological Assessment: this type of journal is simply at an inherent disadvantage because of the type of content it publishes. There is also a 5-year version of the impact factor that would be much more appropriate in the case of European Journal of Psychological Assessment. Perhaps we should pay more attention to it in the future.

References

  • Alloway, T. P. (2009). Working memory, but not IQ, predicts subsequent learning in children with learning difficulties. European Journal of Psychological Assessment, 25, 92–98.

  • Alonso-Arbiol, I., & van de Vijver, F. J. R. (2010). A historical analysis of the European Journal of Psychological Assessment: A comparison of the earliest (1992–1996) and the latest years (2005–2009). European Journal of Psychological Assessment, 26, 238–247.

  • Ambwani, S., Warren, C. S., Gleaves, D. H., Cepeda-Benito, A., & Fernandez, M. C. (2008). Culture, gender, and assessment of fear of fatness. European Journal of Psychological Assessment, 24, 81–87.

  • Bereiter, C. (1994). Implications of postmodernism for science, or, science as progressive discourse. Educational Psychologist, 29, 3–12.

  • De Bruyn, E. (2008). About the journal: Editorial board and policy matters. European Journal of Psychological Assessment, 24, 79–80.

  • Dewitte, M., De Houwer, J., & Buysse, A. (2008). On the role of the implicit self-concept in adult attachment. European Journal of Psychological Assessment, 24, 282–289.

  • Gawronski, B., Deutsch, R., LeBel, E. P., & Peters, K. R. (2008). Response interference as a mechanism underlying implicit measures: Some traps and gaps in the assessment of mental associations with experimental paradigms. European Journal of Psychological Assessment, 24, 218–225.

  • Gschwendner, T., Hofmann, W., & Schmitt, M. (2008). Convergent and predictive validity of implicit and explicit anxiety measures as a foundation of specificity and content similarity. European Journal of Psychological Assessment, 24, 254–262.

  • Krumm, S., Schmidt-Atzert, L., & Eschert, S. (2008). Investigating the structure of attention: How do test characteristics of paper-pencil sustained attention tests influence their relationship with other attention tests? European Journal of Psychological Assessment, 24, 108–116.

  • Kuhn, J.-T., & Holling, H. (2009). Measurement invariance of divergent thinking across gender, age, and school forms. European Journal of Psychological Assessment, 25, 1–7.

  • Lievens, P., & Peeters, H. (2008). Interviewers’ sensitivity to impression management tactics in structured interviews. European Journal of Psychological Assessment, 24, 174–180.

  • Meiser, T., & Machunsky, M. (2008). The Personal Structure and Personal Need for Structure: A mixture-distribution Rasch analysis. European Journal of Psychological Assessment, 24, 27–34.

  • Nosek, B. A., & Hansen, J. J. (2008). Personalizing the implicit association test increases explicit evaluation of target concepts. European Journal of Psychological Assessment, 24, 226–236.

  • Rauch, W., Schweizer, K., & Moosbrugger, H. (2008). An IRT analysis of the personal optimism scale. European Journal of Psychological Assessment, 24, 49–56.

  • Richetin, J., & Perugini, M. (2008). When temporal contiguity matters: A moderator of the predictive validity of implicit measures. European Journal of Psychological Assessment, 24, 246–253.

  • Rudolph, A., Schröder-Abe, M., Schütz, A., Gregg, A. P., & Sedikides, C. (2008). Through a glass, less darkly? Reassessing convergent and discriminant validity in measures of implicit self-esteem. European Journal of Psychological Assessment, 24, 273–281.

  • Schmuckle, S. C., Back, M. D., & Egloff, B. (2008). Validity of the five-factor model for the implicit self-concept of personality. European Journal of Psychological Assessment, 24, 263–272.

  • Schnabel, K., Asendorpf, J. B., & Greenwald, A. G. (2008). Assessment of individual differences in implicit cognition: A review of IAT measures. European Journal of Psychological Assessment, 24, 282–289.

  • Stankov, L., Lee, J., & Paek, I. (2009). Realism of confidence judgments. European Journal of Psychological Assessment, 25, 123–130.

  • Stegmann, J. (1997). How to evaluate journal impact factors. Nature, 390, 550.

  • Teige-Mocigemba, S., Klauer, K. C., & Rothermund, K. (2008). Minimizing method-specific variance in the IAT: A single block IAT. European Journal of Psychological Assessment, 24, 237–245.

  • Weekers, A. M., & Meijer, R. R. (2008). Scaling response processes on personality items using unfolding and dominance models: An illustration with a Dutch Dominance and Unfolding Personality Inventory. European Journal of Psychological Assessment, 24, 65–77.

  • Westmeyer, H. (1996). The constructionist approach to psychological assessment: Problems and prospects. In W. Battmann & S. Dutke (Eds.), Processes of the molar regulation of behavior (pp. 309–325). Lengerich, Germany: Pabst.

  • Witteman, C., Van den Bercken, J., Claes, L., & Godoy, A. (2009). Assessing rational and intuitive thinking styles. European Journal of Psychological Assessment, 25, 39–47.

  • Zaldívar, F., Molina, A. M., López Rios, F., & García Montes, J. M. (2009). Evaluation of alcohol and other drug use and the influence of social desirability: Direct and camouflaged measures. European Journal of Psychological Assessment, 25, 244–251.

Karl Schweizer, Department of Psychology, Goethe University Frankfurt, Mertonstr. 17, D-60054 Frankfurt a.M., Germany, +49 69 798-22081, +49 69 798-23847,