Editorial

Perceptual and Cognitive Assessment

Published online: https://doi.org/10.1027/1015-5759/a000148

Cognition, including perceptual processes, has dominated psychological research over the last decades. However, articles that report on perceptual and cognitive measures with an assessment focus (excluding achievement measures), that is, articles emphasizing the psychometric quality of the measures, are rare. For this reason we concluded that a special issue of the European Journal of Psychological Assessment should present perceptual and cognitive measures in accordance with the established tradition of psychometric research. We asked colleagues who have done research in this area and are experts in the field to share their expertise and to present major research tools from a psychometric perspective.

Although the European Journal of Psychological Assessment covers the field of psychological assessment as a whole, the majority of published research comes from the areas of personality, attitude, and values questionnaires (Alonso-Arbiol & van de Vijver, 2010), with perceptual and cognitive measures rarely being considered. The category of cognitive and educational tests, which includes these measures, even showed a decrease from the 1992–1996 period to the 2005–2009 period: During the latter period only 6.4% of the published papers belonged to this category. In fact, in the most recent evaluation of the journal the category was not even listed (Ortner & Vormittag, 2011). During the last two years only three articles from this category have appeared in the European Journal of Psychological Assessment (Blickle, Kramer, & Mierke, 2010; Blickle, Momm, Liu, Witzki, & Steinmayr, 2011; Gignac, 2010).

One possible reason for the underrepresentation of articles reporting on perceptual and cognitive measures is that such measures are often constructed mainly for research purposes rather than for individual assessment. Such research measures must show a high degree of construct validity, although an investigation according to the well-known multitrait-multimethod design proposed by Campbell and Fiske (1959) is usually not provided. Furthermore, other qualities that characterize the construction of a psychometrically sound test or questionnaire (e.g., Schweizer, 2010) are usually not considered and are presumably regarded as relatively unimportant. The neglect of these qualities is especially obvious in the absence of reported reliability statistics. Instead, reports on experiments using perceptual and cognitive measures usually provide information on effect sizes. In many cases, not even the internal consistency measure usually found in reports on test or questionnaire construction (e.g., Schweizer, 2011) is considered.
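For readers less familiar with this reporting convention, a minimal sketch of the coefficient referred to above as the internal consistency measure (our illustration, not taken from the cited articles) is Cronbach's alpha for a scale of $k$ items,

$$\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{i}$ denotes the variance of item $i$ and $\sigma^{2}_{X}$ the variance of the total score. It is precisely this kind of statistic that is typically missing from experimental reports on perceptual and cognitive measures.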

The articles of this special issue can be divided into four subcategories: measures for assessing attention, metacognition, perception, and working memory. Furthermore, the research can be classified as psychometric versus validation studies. Most articles concentrate on metacognition. Efklides and Vlachopoulos (2012) present a measure of the metacognition related to mathematics. French, Hand, Therrien, and Vazquez (2012) investigate a measure of critical thinking. Nęcka, Lech, Sobczyk, and Śmieja (2012) introduce a measure of cognitive control that refers to executive functions and, thus, relates metacognition to working memory. The contribution by Stankov, Pallier, Danthiir, and Morony (2012) explores confidence in perception as a complementary assessment dimension. The second largest group includes three articles on the assessment of working memory. The article by Birney, Bowman, Beckman, and Seah (2012) highlights the assessment of processing capacity from the perspective of relational complexity theory. Redick et al. (2012) provide a psychometric evaluation of their complex span task for the assessment of working memory capacity. This group of articles also includes a description and psychometric evaluation of the web version of the Exchange Test that focuses on the updating function of working memory (Schreiner, Altmeyer, & Schweizer, 2012). Third, a paper from the perception category focuses on the assessment of the ability to estimate time duration (Rammsayer, 2012). Finally, an article presents a validation study for a measure of attention in a specific population (Rauch, Gold, & Schmitt, 2012).

The field of psychological assessment is thus alive and well, not least as a result of the enduring demands from diverse fields of applied psychology, many of which are largely assessment based. Classical trait/questionnaire and performance test approaches will retain their high relevance in this field, though new diagnostic approaches should enhance the much desired heterogeneity of the field.

If we look at the first group of papers (on metacognition, as well as the paper by Rauch et al.), we see partially low correlations between self-estimates of performance and performance itself. We think more work is needed to elucidate the reasons for this low correspondence, which is reminiscent of findings in other domains not covered here, for example, the low correspondence between intellectual abilities and (work) interests (e.g., Ackerman & Heggestad, 1997). Such findings underscore the importance of distinguishing between performance itself and metacognition about one's own level of performance. We need a better understanding of why, under which conditions, and for which traits the correspondence is lower or higher. And why are processes of metacognition not homogeneous within an individual but can diverge even within a single domain (as shown by Stankov et al., 2012, for perceptual processes)?

With respect to the second and third groups of papers, it seems remarkable that, after many 20th-century decades of focusing on rather broad cognitive abilities, the old Galtonian/J. McKeen Cattellian reductionist approach (see Deary, 2000) has experienced a remarkable renaissance since the 1980s, beginning with the bulk of research on the relationships between speed of information processing and intelligence (see Jensen, 1998) and followed by the large body of research on the relationships between intelligence and working memory/executive functions (see the meta-analysis by Ackerman, Beier, & Boyle, 2005). This development points to a continued interest in elucidating the causes of individual differences in more complex traits, an interest that can also be observed in the increased research efforts in neuroscience approaches to individual differences variables such as intelligence, creativity, and personality (e.g., Neubauer & Fink, 2009). In this vein, we hope that this special issue will stimulate further research on basic processes in psychological assessment as well as on the metacognition of one's own level of performance.

References

  • Ackerman, P. L., Beier, M. E., & Boyle, M. O. (2005). Working memory and intelligence: The same or different constructs? Psychological Bulletin, 131, 30–60.

  • Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245.

  • Alonso-Arbiol, I., & van de Vijver, F. (2010). A historical analysis of the European Journal of Psychological Assessment: A comparison of the earliest (1992–1996) and the latest years (2005–2009). European Journal of Psychological Assessment, 26, 238–247. doi: 10.1027/1015-5759/a000032

  • Birney, D. P., Bowman, D. B., Beckman, J., & Seah, Y. Z. (2012). Assessment of processing capacity: Reasoning in Latin Square tasks in a population of managers. European Journal of Psychological Assessment, 28, 216–226. doi: 10.1027/1015-5759/a000146

  • Blickle, G., Kramer, J., & Mierke, J. (2010). Telephone-administered intelligence testing for research in work and organizational psychology: A comparative assessment study. European Journal of Psychological Assessment, 26, 154–161. doi: 10.1027/1015-5759/a000022

  • Blickle, G., Momm, T., Liu, Y., Witzki, A., & Steinmayr, R. (2011). Construct validation of the Test of Emotional Intelligence (TEMINT): A two-study investigation. European Journal of Psychological Assessment, 27, 282–289. doi: 10.1027/1015-5759/a000075

  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.

  • Deary, I. J. (2000). Looking down on human intelligence: From psychometrics to the brain. Oxford, UK: Oxford University Press.

  • Efklides, A., & Vlachopoulos, S. P. (2012). Measurement of metacognitive knowledge of self, task, and strategies in mathematics. European Journal of Psychological Assessment, 28, 227–239. doi: 10.1027/1015-5759/a000145

  • French, B. F., Hand, B., Therrien, W. J., & Vazquez, J. A. V. (2012). Detection of sex differential item functioning in the Cornell Critical Thinking Test. European Journal of Psychological Assessment, 28, 201–207. doi: 10.1027/1015-5759/a000127

  • Gignac, G. (2010). Seven-factor model of emotional intelligence as measured by Genos EI: A confirmatory factor analytic investigation based on self- and rater-report data. European Journal of Psychological Assessment, 26, 309–316. doi: 10.1027/1015-5759/a000041

  • Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

  • Nęcka, E., Lech, B., Sobczyk, B., & Śmieja, M. (2012). How much do we know about our own cognitive control? Self-report and performance measures of executive functions. European Journal of Psychological Assessment, 28, 240–247. doi: 10.1027/1015-5759/a000147

  • Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33, 1004–1023.

  • Ortner, T. M., & Vormittag, L. (2011). Articles published in EJPA 2009–2010: An analysis of the features of the articles and the characteristics of the authors. European Journal of Psychological Assessment, 27, 290–298. doi: 10.1027/1015-5759/a000082

  • Rammsayer, T. H. (2012). Developing a psychophysical measure to assess duration discrimination in the millisecond range. European Journal of Psychological Assessment, 28, 172–180. doi: 10.1027/1015-5759/a000124

  • Rauch, W. A., Gold, A., & Schmitt, K. (2012). Combining cognitive and personality measures of impulse control in the assessment of childhood ADHD. European Journal of Psychological Assessment, 28, 208–215. doi: 10.1027/1015-5759/a000128

  • Redick, T. S., Broadway, J. M., Meier, M. E., Kuriakose, P. S., Unsworth, N., Kane, M. J., & Engle, R. W. (2012). Measuring working memory capacity with automated complex span tasks. European Journal of Psychological Assessment, 28, 164–171. doi: 10.1027/1015-5759/a000123

  • Schreiner, M., Altmeyer, M., & Schweizer, K. (2012). The web version of the Exchange Test: Description and psychometric properties. European Journal of Psychological Assessment, 28, 181–189. doi: 10.1027/1015-5759/a000125

  • Schweizer, K. (2010). Some guidelines concerning the modeling of traits and abilities in test construction. European Journal of Psychological Assessment, 26, 1–2. doi: 10.1027/1015-5759/a000001

  • Schweizer, K. (2011). On the changing role of Cronbach’s alpha in the evaluation of the quality of a measure. European Journal of Psychological Assessment, 27, 143–144. doi: 10.1027/1015-5759/a000069

  • Stankov, L., Pallier, G., Danthiir, V., & Morony, S. (2012). Perceptual underconfidence: A conceptual illusion? European Journal of Psychological Assessment, 28, 190–200. doi: 10.1027/1015-5759/a000126

Karl Schweizer, Department of Psychology, Goethe University Frankfurt, Mertonstr. 17, 60054 Frankfurt a. M., Germany, +49 69 798-22081, +49 69 798-23847
Aljoscha C. Neubauer, Department of Psychology, University of Graz, Universitätsplatz 2, 8010 Graz, Austria, +43 316 380-5124, +43 316 380-9811