Editorial

A Look Back and a Glimpse Forward

A Personal Exchange Between the Current and the Incoming Editor-in-Chief of EJPA

Published Online: https://doi.org/10.1027/1015-5759/a000387

After four interesting, instructive, sometimes frustrating, but mostly enjoyable years, Matthias Ziegler will hand over his role as Editor-in-Chief to Samuel Greiff at the beginning of 2017. Over the last months, the two have engaged in various discussions about the field of assessment, the journal, its policies, and its mission. The following exchange took place during one of their meetings, in which they discussed the European Journal of Psychological Assessment and matters of assessment in general over a hot cup of coffee.

SG

Matthias, when you think back, what was the most rewarding experience during your 4-year editorship?

MZ

When I took over the role of Editor-in-Chief, I was not sure what would actually be waiting for me. In the beginning I was almost overwhelmed by the many choices I needed to make and the options I had. Do I accept a paper for review? Do I reject a paper based on the reviews? What are the cornerstones of such a decision? What if I miss a diamond in the rough, which is then published in another journal? Fortunately, I had a detailed game plan (laughs). Anyway, the most rewarding aspect of my work was probably the opportunity to have a direct impact on one of my favorite research fields, psychological assessment, by influencing what is published. Being able to do this with such a great cast of Associate Editors made it even more rewarding. The final incentive was to see the papers that we had handled and helped to shape being cited. Our current impact factor1 bears witness to this constructive collaboration.

But Samuel, speaking of rewards, what motivates you to take over the role of Editor-in-Chief?

SG

Well, of course I also have a detailed game plan in the drawer, but I expect that I will have to change it over and over again as the journal and the community evolve over the coming years. This is actually one of the major incentives that motivates me to take over the role of Editor-in-Chief – the constant challenge of developing the journal as the field of assessment changes. Over the last years I have gathered experience in handling manuscripts, mostly as Associate Editor of EJPA, and I have enjoyed the tasks associated with this role immensely. In a way, the journal is unique because the methodological aspects and the content-related aspects are so closely intertwined. In fact, handling manuscripts as Associate Editor has brought me in touch with content areas that I wasn’t even aware of before, so there is often a learning experience involved for the editors as well (MZ enthusiastically nods). Apart from this, the journal has great and committed Associate Editors, an extremely experienced board of Consulting Editors, and knowledgeable reviewers. It will be a great chance to carry the journal further as a service-oriented and high-quality outlet for the international assessment community.

So, I am sure I will get in touch with many people, engage in a number of exchanges, and find myself in unexpected situations over the next years. I would be interested to hear: of everything you experienced, what was the worst?

MZ

I am not sure that I want to see that printed (frowns). However, the worst experiences always had to do with considerable delays in the review process. Some of the delays were not caused by us, for example, when we simply couldn’t find reviewers for a paper. But I have to admit that some delays were clearly our fault. As Editor-in-Chief, it is important to monitor all papers under review, not just the ones you are handling personally. Let me just say that this is something I had to learn the hard way. And please, don’t get me wrong, this is not meant as a criticism of the Associate Editors. They are all doing a great job.

How about you, Samuel? Do you expect any difficulties and how are you planning to handle those?

SG

At this stage, I don’t even want to start thinking of all the problems that will come along the way, but would rather keep up the illusion of an entirely flawless enterprise (laughs). But I agree with you: ensuring the high quality of the peer-review process is a challenge that I sometimes struggled with myself when handling papers. I remember one manuscript for which I had to approach about 25 potential reviewers to finally get the number of reviewers required. I recall being quite desperate at some point, and the authors’ patience was tested as well. On a general level, I think the major challenge is to ensure an excellent peer-review process on the one hand and to offer professional and swift service to authors on the other. That is why I believe that desk rejections will become an increasingly important part of the review process. Given that, across the field, the number of submissions is increasing – where are we? At well over 300 annual submissions for EJPA? – sending out only those submissions that have a fair chance of making it successfully through the review process saves resources for everybody involved and provides quick(er) decisions to authors. I hope that this will help authors to keep the journal in mind as a high-quality and professional outlet.

Of course, rejections of any type are always a disappointment. I am well experienced with them myself (sighs). To this end, authors are probably very interested to hear what they can do to maximize their chances. If you could give authors one piece of advice, what would it be?

MZ

Read our editorials! Seriously, there were so many papers we had to reject for the same reasons that we started writing editorials to make our game plan more transparent. In these editorials, we positioned ourselves regarding specific techniques (e.g., item selection in Ziegler, 2014a), asked for papers on specific problems (e.g., Ziegler, 2015), or announced general journal policies (e.g., Ziegler & Bensch, 2013). Following these editorials is certainly good advice for authors. The most important advice I would give, though, is to follow the ABC of test construction (Ziegler, 2014b). The reader simply needs to know what the measure presented in a paper is supposed to measure, what the intended uses of the measure are, and in which populations it is supposed to be used. Answers to these three questions directly inform the validation process and are the basis for a fair and transparent evaluation of a paper. Thus, make your life, the editor’s life, and the reviewers’ lives easier!

And, Samuel, what should authors expect from you? What is your advice right now?

SG

First of all, I will definitely continue the tradition of publishing editorials in which important information is given or relevant statements are made. They are a great means for an editor to communicate a journal’s policy to readers and authors. In fact, there will be an inaugural editorial in the next issue, 01/2017. There, we will announce some news about the way the journal works, such as changes in the team of Associate Editors and in the way the Associate Editors and the Editor-in-Chief work together. We will also make transparent which types of submissions are likely to lead to desk rejections – so, everybody: stay tuned for the next editorial. As I just mentioned, with regard to the publication process, authors should expect to see more desk rejections as a means to make quick decisions and to free up resources for those manuscripts that enter the peer-review process. On the content level, not much will change. Just as you did in the past, Matthias, I consider the mission of the journal a broad one, and submissions from diverse content areas are very much welcome. Their common umbrella is an excellent methodological level and a state-of-the-art approach to assessment. Personally, I would like to see papers on innovative methods or innovative assessment instruments, and I encourage authors to pursue some adventurous avenues in the field of assessment as well.

So, a lot is happening for the journal, but also for the assessment community in general. With this in mind, what is, in your opinion, the greatest challenge the field of assessment faces over the next 10 years?

MZ

Good question. I see several challenges. First of all, I think we are in a decisive period regarding psychological constructs. If you look into personality psychology, for example, there are a number of “new” constructs, such as the dark triad, that have inspired a lot of research. Moreover, the traditional outlook on traits as being stable across time and situations is increasingly challenged (Ziegler & Horstmann, 2015), as the work by Rauthmann (e.g., Rauthmann et al., 2014; Rauthmann, Sherman, & Funder, 2015; Rauthmann & Sherman, 2016a, 2016b) or Fleeson (e.g., Fleeson & Jayawickreme, 2015) impressively shows. Our traditional instruments, mostly questionnaires, are not really suited to capturing such situation-dependent fluctuations. I think it will be necessary to use other techniques, such as experience sampling (Hopwood, Zimmermann, Pincus, & Krueger, 2015) or virtual reality (Mertens & Allen, 2008; Neubauer, Bergner, & Schatz, 2010), or, well, I don’t know… (laughs). In the end, I believe that our ability to satisfy the growing need for reliable and valid measures for all kinds of constructs affecting everyday life will be tested.

What do you think about the major challenges?

SG

Honestly speaking, I think I will be in a much better (and wiser) position to answer this after my editorial term; it will be much easier to do so from a “wise man’s position” once I pass on my editorial duties (laughs). I am, however, convinced that over the next years we will experience many changes and associated challenges, but most of them will unfold continuously. It is probably only in retrospect that we will really see what has changed. When I picture myself having a similar conversation after my editorial term, I imagine that we will see these changes much more clearly then than we do now. First of all, additional advancements from a methodological perspective are likely to happen (e.g., Eid, Geiser, Koch, & Heene, 2016). It is fascinating how quickly this field is still evolving: what was absolutely state of the art 10 years ago is often the expected standard today. I also believe that we will see new ways in which assessments are delivered, and these might have strong implications for validity and the underlying constructs (e.g., Shute, Leighton, Jang, & Chu, 2016). This will include not only computers, but also smartphones, tablets, online assessments, and so forth. From this, I also expect to see more (and more sophisticated) use of process measures for, among other things, increasing our understanding of the underlying mechanisms reflected in an assessment outcome (e.g., Goldhammer et al., 2014; Greiff, Wüstenberg, & Avvisati, 2015).

Now, to bring this back to the journal: what do you think are the direct implications?

MZ

Good question. Again (smiles). I see implications on both the journal side and the author side. The journal needs to be more flexible and open to such new trends. This, of course, requires an able editorial team and like-minded reviewers. I also think that open science aspects are largely overlooked in assessment and assessment outlets. That needs to change fast. We need to be able to trust the measures we are using. Authors also need to be more open and maybe adventurous (laughs). At the same time, the set of quality standards should not be weakened.

Anyway, I guess it is now your call. So again, as you are taking over, what do you think?

SG

Thanks for passing the burden on to me (laughs). As I see it, the most important aspect has been, is, and will continue to be the scientific quality of the work that we publish. This is the overall framework for all publication decisions, and it is set in stone. Within this framework, we will try to explore innovative and promising avenues, both from a content perspective – I have been working on 21st century skills and, together with other areas, this might be a promising field – and from a formal perspective – open science is an important keyword and a relevant topic for EJPA as well. To put all of this into practice, we need a community that is strongly committed to these causes and a well-functioning journal. So, in a way, I feel I am in the comfortable position of taking over a journal that is already well positioned in the field and that is carried and supported by such great people. Thanks to everybody involved, but in particular to you, Matthias, for doing an incredible job over the last 4 years. Needless to say, I am glad that you could be convinced to remain part of the team as Associate Editor for the field of personality assessment.

Now, what is the best way to end? Should we say “This is not an ending, but a new beginning,” or should we simply say “Let’s get to work”…

1 The current IF for 2015 is 1.969.

References

  • Eid, M., Geiser, C., Koch, T. & Heene, M. (2016). Anomalous results in g-factor models. Explanations and alternatives. Psychological Methods. Advance online publication. doi: 10.1037/met0000083

  • Fleeson, W. & Jayawickreme, E. (2015). Whole trait theory. Journal of Research in Personality, 56, 82–92. doi: 10.1016/j.jrp.2014.10.009

  • Goldhammer, F., Naumann, J., Stelter, A., Toth, K., Rölke, H. & Klieme, E. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106, 608–626. doi: 10.1037/a0034716

  • Greiff, S., Wüstenberg, S. & Avvisati, F. (2015). Computer-generated log-file analyses as a window into students’ minds? A showcase study based on the PISA 2012 assessment of problem solving. Computers & Education, 91, 92–105. doi: 10.1016/j.compedu.2015.10.018

  • Hopwood, C. J., Zimmermann, J., Pincus, A. L. & Krueger, R. F. (2015). Connecting personality structure and dynamics. Towards a more evidence-based and clinically useful diagnostic scheme. Journal of Personality Disorders, 29, 431–448. doi: 10.1521/pedi.2015.29.4.431

  • Mertens, R. & Allen, J. J. B. (2008). The role of psychophysiology in forensic assessments. Deception detection, ERPs, and virtual reality mock crime scenarios. Psychophysiology, 45, 286–298.

  • Neubauer, A. C., Bergner, S. & Schatz, M. (2010). Two- vs. three-dimensional presentation of mental rotation tasks. Sex differences and effects of training on performance and brain activation. Intelligence, 38, 529–539. doi: 10.1016/j.intell.2010.06.001

  • Rauthmann, J. F., Gallardo-Pujol, D., Guillaume, E. M., Todd, E., Nave, C. S., Sherman, R. A., … Funder, D. C. (2014). The Situational Eight DIAMONDS. A taxonomy of major dimensions of situation characteristics. Journal of Personality and Social Psychology. Advance online publication. doi: 10.1037/a0037250

  • Rauthmann, J. F. & Sherman, R. A. (2016a). Measuring the Situational Eight DIAMONDS characteristics of situations. An optimization of the RSQ-8 to the S8*. European Journal of Psychological Assessment, 32, 155–164. doi: 10.1027/1015-5759/a000246

  • Rauthmann, J. F. & Sherman, R. A. (2016b). Ultra-brief measures for the Situational Eight DIAMONDS domains. European Journal of Psychological Assessment, 32, 165–174. doi: 10.1027/1015-5759/a000245

  • Rauthmann, J. F., Sherman, R. A. & Funder, D. C. (2015). Principles of situation research. Towards a better understanding of psychological situations. European Journal of Personality, 29, 363–381. doi: 10.1002/per.1994

  • Shute, V., Leighton, J. P., Jang, E. E. & Chu, M.-W. (2016). Advances in the science of assessment. Educational Assessment, 2016, 1–27. doi: 10.1080/10627197.2015.1127752

  • Ziegler, M. (2014a). Comments on item selection procedures. European Journal of Psychological Assessment, 30, 1–2. doi: 10.1027/1015-5759/a000196

  • Ziegler, M. (2014b). Stop and state your intentions! Let’s not forget the ABC of test construction. European Journal of Psychological Assessment, 30, 239–242. doi: 10.1027/1015-5759/a000228

  • Ziegler, M. (2015). “F*** you, I won’t do what you told me!” Response biases as threats to psychological assessment. European Journal of Psychological Assessment, 31, 153–158. doi: 10.1027/1015-5759/a000292

  • Ziegler, M. & Bensch, D. (2013). Lost in translation. Thoughts regarding the translation of existing psychological measures into other languages. European Journal of Psychological Assessment, 29, 81–83. doi: 10.1027/1015-5759/a000167

  • Ziegler, M. & Horstmann, K. (2015). Discovering the second side of the coin. Integrating situational perception into psychological assessment. European Journal of Psychological Assessment, 31, 69–74. doi: 10.1027/1015-5759/a000258

Matthias Ziegler, Institut für Psychologie, Humboldt Universität zu Berlin, Rudower Chaussee 18, 12489 Berlin, Germany
Samuel Greiff, Cognitive Science & Assessment, University of Luxembourg, 11, Porte des Sciences, 4366 Esch-sur-Alzette, Luxembourg