Editorial

Advances and Continuing Challenges in Objective Personality Testing

Published Online: https://doi.org/10.1027/1015-5759/a000213

The use of objective behavioral indicators instead of self-reported behaviors and self-ratings has a long history in personality assessment. The basic idea of objective personality tests (OPTs) can be traced back to James McKeen Cattell’s proposal of mental tests in 1890. A few decades later, OPT procedures were employed by the German and US militaries during World War II (see Fitts, 1946). To this day, most tests designed for the objective assessment of personality have been based on Raymond Bernard Cattell’s comprehensive theoretical and empirical work and his well-known postulate that a complete investigation of personality requires heterogeneous data sources including self-report data (Q-data), life indicators of personality often obtained from observer reports (L-data), and objective performance or test data (Cattell, 1946; Cattell & Kline, 1977).

At present, we have a very large and diverse collection of OPTs at our disposal. More recent developments in OPTs have been inspired and facilitated by the availability of digital technologies that have become simultaneously more powerful and more affordable. The computer as a tool has contributed greatly to innovative and ingenious procedures for item and task presentation and the precise registration of behavioral responses (see Ortner et al., 2007). In contrast to the OPT procedures proposed by Cattell and his students, most of the OPTs that were developed during the 1990s and later are not bound to holistic personality approaches and are typically designed for the assessment of single constructs (e.g., Lejuez, Richards, et al., 2002; Proyer, 2007). These OPTs are highly diverse with regard to task concepts, materials, and scoring methods. They include personality tests masked as achievement tasks (e.g., Kubinger & Ebenhöh, 1996; Schmidt-Atzert, 2007), more or less complex tasks embedded in simulated real-life situations (e.g., Aguado, Rubio, & Lucía, 2011; Rubio, Hernández, Zaldivar, Marquez, & Santacreu, 2010), and questionnaire-type OPTs that ask for evaluations or decisions yet assess different constructs than those that might be suggested by the item content (e.g., Jasper & Ortner, 2014). Despite this diversity, all OPTs share as a common principle the use of observable behavior on performance tasks or other highly standardized miniature situations (Cattell & Warburton, 1967) as personality indicators. As a second common feature, OPTs typically lack face validity (see Cattell, 1958; Schmidt, 1975). Therefore, and because of the frequent use of performance-based indicators as well as nontransparent scoring rules, OPTs are less susceptible to faking than Q-data and L-data (see Cattell & Kline, 1977).
In support of this claim, several studies have impressively shown that OPT scores are more difficult to fake than questionnaires (e.g., Arendasy, Sommer, Herle, Schützhofer, & Inwanschitz, 2011; Elliot, Lawty-Jones, & Jackson, 1996; Hofmann & Kubinger, 2001; Ziegler, Schmidt-Atzert, Bühner, & Krumm, 2007). Given this important advantage, it seems somewhat surprising that journal articles and book chapters dealing with OPTs are still scarce in the psychological literature, especially when compared to the enormous amount of attention that has been paid to the so-called indirect measurement approaches that have been proposed during the last 15 years (De Houwer, Teige-Mocigemba, Spruyt, & Moors, 2009; Fazio & Olson, 2003; Greenwald, Poehlman, Uhlmann, & Banaji, 2009).

1 We are aware that a number of procedures introduced for the assessment of implicit dispositions (often called indirect measures), such as the IAT, have been considered to be OPTs. We would like to distinguish between indirect measures, which capture mental associations, and OPTs. OPTs as we understand them employ more realistic behavioral expressions of personality traits in simulated miniature situations. We agree that this definition is not perfectly sharp and that, for example, the manuscript by Quirin and Bode (2014) in this special issue may represent an OPT in the twilight zone between this definition and the definition of indirect measures.

A number of reasons may have contributed to this situation, and we would like to take the opportunity to address a selection of these factors here.

A first reason for the infrequent use of OPTs in research and applied assessment is probably the inconsistent naming of OPT procedures. Inconsistent terminology always creates challenges for narrative reviews, textbooks, and quantitative meta-analyses. In the case of OPTs, different terms have been proposed and used even for the same group of tests. In addition to older terms such as Objective-Analytic Tests (Cattell, 1955), Cursive Miniature Situations (Cattell, 1941, 1944), or simply Objective Tests (e.g., Cattell, 1946), these tests have also been called Performance Tests of Personality (Cronbach, 1970), Behavioral Measures (Lejuez, Read, et al., 2002), and Experiment-Based Assessments (Kubinger, 2009). And as if this Babylonian confusion weren’t enough, the term Objective Personality Tests is sometimes used to refer to self-report questionnaires (e.g., Meyer & Kurtz, 2006).

A second factor that seems to have slowed the use of OPTs is the vast diversity of conceptualizations, designs, items, tasks, required responses, and scoring rules. Whereas questionnaires are rather homogeneous in item format, design, and response alternatives, this is not true for OPTs. For example, some OPTs require reactions to tachistoscopically presented stimuli (e.g., Proyer, 2007; Proyer & Häusler, 2007), whereas others employ viewing time, reaction time (e.g., Proyer, 2007), reaction speed (Schmidt-Atzert, 2007), pumping up a balloon (Lejuez, Read, et al., 2002), navigating a figure in a maze (Ortner, Kubinger, Schrott, Radinger, & Litzenberger, 2006), the number of times a test taker clicks to close a pop-up video clip in order to continue working on an initial assignment (Poorthuis, Thomaes, Denissen, de Castro, & van Aken, 2014), the distribution of money between oneself and another person, or the investment of money for altruistic punishment (Baumert, Schlösser, & Schmitt, 2014).

Third, and unlike indirect assessment approaches such as the Implicit Association Test (Greenwald, McGhee, & Schwartz, 1998) or the Affect Misattribution Procedure (Payne, Cheng, Govorun, & Stewart, 2005), which can be used for the assessment of many constructs (attitudes, self-concept, self-esteem, motives), a specific OPT paradigm can usually be employed only for the assessment of one particular construct. As a consequence, the more flexible indirect measurement procedures may be more attractive to researchers and practitioners than the less flexible OPTs. In addition, not every construct can be easily measured via overt behavior. Certain traits such as extraversion can be measured well with self-report questionnaires and self-ratings, but their measurement may be difficult to implement with OPTs (Pawlik, 2006).

Fourth, due to the face validity, homogeneous makeup, and frequent use of self-report questionnaires, most researchers and practitioners have a good idea of what they measure and how reliable they are. This is not true for many implicit procedures and OPTs because of their large diversity, nontransparent nature, and sometimes unintuitive scoring (see Ortner, Proyer, & Kubinger, 2006). Given these features, the reliability and validity of these methods are more difficult to guess and, in fact, also more difficult to determine empirically (see Pawlik, 2006). As a consequence, researchers and practitioners may feel less comfortable using them when familiar and easy-to-use questionnaires are available.

Fifth, and perhaps most importantly, questionnaires and self-ratings of the same construct tend to converge reasonably well with corresponding L-data. By contrast, low convergence has been observed between T-data and Q-data and between T-data and L-data ever since OPTs were systematically developed by Cattell and his team (Häcker, Schmidt, Schwenkmezger, & Utz, 1975; Hundleby, Pawlik, & Cattell, 1965; Proyer & Häusler, 2008; Ziegler, Schmukle, Egloff, & Bühner, 2010). The same lack of convergence has been found for indirect measurement procedures such as the IAT (Hofmann, Gawronski, Gschwendner, Le, & Schmitt, 2005; Nosek, 2007). It is difficult to know what this lack of convergence means (Gschwendner, Hofmann, & Schmitt, 2008). It may mean that OPTs (and indirect procedures) measure personality components that cannot be measured with self-report measures (Wilson, Lindsey, & Schooler, 2000). Of course, it may also mean that OPTs lack validity. The latter interpretation does not seem to be convincing for two reasons: At least some OPTs converge reasonably well with self-reports (see Dislich, Zinkernagel, Ortner, & Schmitt, 2010; Proyer, 2007). Moreover, substantial predictive validity estimates using relevant outcomes have been reported for several OPTs. To cite impressive examples from the domain of risk propensity assessment, scores on the Balloon Analogue Risk Task (BART; Lejuez, Read, et al., 2002) and the Risk Propensity Task (Aguado, Rubio, & Lucía, 2011) were significantly related to risk-related health behavior (Aguado et al., 2011; Lejuez, Aklin, Jones, et al., 2003; Lejuez, Aklin, Zvolensky, & Pedulla, 2003). The Crossing the Street Test and the Roulette Test (Santacreu, Rubio, & Hernández, 2006) were found to predict guessing tendencies on a multiple-choice test (Rubio et al., 2010).
We will later return to the question of what the low convergence between T-data and Q-data may mean and how the construct validity of OPTs can be investigated by going beyond the traditional strategy of convergent and discriminant validation as employed in the multitrait-multimethod framework proposed by Campbell and Fiske (1959).

The present special issue of the European Journal of Psychological Assessment was inspired by our desire to collect recent work on Objective Personality Testing, to explore advances as well as continuing challenges, and to give some advice on how to meet these challenges in future research based on the compiled articles and additional insights we have obtained from related research programs (see the 2008 Special Issue of the European Journal of Psychological Assessment on “Advances and challenges in the indirect measurement of individual differences at age 10 of the Implicit Association Test,” edited by Hofmann and Schmitt, 2008).

In the first of five original papers in the present special issue on Objective Personality Tests, Poorthuis et al. (2014) report promising results on the incremental validity of two new OPTs for assessing conscientiousness and agreeableness in school children. Their longitudinal study revealed significant predictive validity regarding changes in children’s school achievement and social acceptance across the transition to secondary school.

Based on Latent State-Trait Theory (Steyer, Schmitt, & Eid, 1999), the second paper identified situation-specific and stable trait components of behavior in experimental games (Baumert et al., 2014). Although not classified as OPTs in the literature, experimental games such as the Dictator Game and the Ultimatum Game meet the criteria for OPTs established by Cattell and are accepted by the majority of authors who employ these methods. Experimental or economic games, as they are often called, are widely used in social psychology and in behavioral economics. In remarkable contrast to their frequent use, the reliability of these procedures has hardly ever been tested. Moreover, the Baumert et al. (2014) study is the first to estimate the degree to which behavior in these games reflects stable traits versus unstable influences of the measurement situation. The parameter estimates of the best-fitting model revealed that the proportion of stable variance in experimental game behavior, as opposed to systematic but occasion-specific variance, is remarkably similar to the proportion of variance accounted for by self-report personality scales.
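The stable-versus-occasion-specific decomposition that this analysis rests on can be sketched in simplified single-indicator LST notation (the symbols below are illustrative and are not taken from Baumert et al., 2014):

```latex
% Observed score of person i on occasion o decomposes into a stable
% latent trait, an occasion-specific state residual, and error:
Y_{io} = T_{i} + SR_{io} + E_{io}
% With the components assumed uncorrelated, the variance decomposes as
\operatorname{Var}(Y_{io}) = \operatorname{Var}(T_{i})
  + \operatorname{Var}(SR_{io}) + \operatorname{Var}(E_{io})
% yielding the consistency (stable share) and occasion-specificity
% coefficients estimated in LST models:
Con = \frac{\operatorname{Var}(T_{i})}{\operatorname{Var}(Y_{io})},
\qquad
Spe = \frac{\operatorname{Var}(SR_{io})}{\operatorname{Var}(Y_{io})}
```

In this framework, reliability equals Con + Spe, so comparing the stable share of variance in game behavior with that of self-report scales amounts to comparing their consistency coefficients.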

Jasper and Ortner (2014) present new OPTs assessing the degree of heuristic thinking associated with three particular thinking heuristics (representativeness, availability, anchoring). Their Objective Heuristic Thinking Test was evaluated with regard to its reliability, factor structure, construct validity, and stability. The a priori factor structure was confirmed. The internal consistencies of the representativeness scale and the availability scale were good, whereas the internal consistency of the anchoring scale was limited, reflecting the content heterogeneity of the quantitative estimates that have to be made to capture this style of heuristic thinking. This initial validity evidence was obtained through a large number of correlations with theoretically related constructs (e.g., field independence) and theoretically unrelated constructs (e.g., extraversion). Finally, high temporal stability was found for all three OPT scores.

The question of convergence between OPTs, IATs, and questionnaires was addressed by Koch, Ortner, Eid, Caspers, and Schmitt (2014), who employed a refined version of the traditional multitrait-multimethod (MTMM; Campbell & Fiske, 1959) model by including the time facet. The multimethod latent-state-trait (MM-LST) model proposed by Courvoisier, Nussbeck, Eid, and Cole (2008) allows for the decomposition of measurement variables into stable and momentary trait influences, stable and momentary method influences, and measurement error influences. The study included two constructs (conscientiousness and intelligence) that were measured with three methods (OPT, IAT, self-report) on three occasions. Parameter estimates of the best-fitting MM-LST model revealed that the OPTs assessed stable rather than momentary components of the constructs. Moreover, a substantial degree of trait-method specificity was identified, meaning that trait components assessed with OPTs hardly overlapped with trait components measured with IATs and self-reports. In line with the studies reported by Poorthuis et al. (2014), predictive validity was found for the conscientiousness OPT.
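Schematically, and as an illustrative simplification of the Courvoisier et al. (2008) measurement model rather than its full form, the decomposition for trait t measured with method m on occasion o can be written as:

```latex
% Stable common trait, stable method-specific deviation,
% momentary (occasion-specific) trait influence, momentary
% method influence, and measurement error:
Y_{tmo} = T_{t} + M_{tm} + O_{to} + OM_{tmo} + E_{tmo}
% Trait-method specificity indexes how much of the stable variance
% is unique to the method (illustrative definition):
TMS_{tm} = \frac{\operatorname{Var}(M_{tm})}
                {\operatorname{Var}(T_{t}) + \operatorname{Var}(M_{tm})}
```

A high trait-method specificity for the OPTs corresponds to the finding that their stable trait components hardly overlap with those captured by IATs and self-reports.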

The last paper presents a typical example of OPTs that are based on judgments collected with a paper-and-pencil method that is similar in format to self-report questionnaires and self-ratings. In their review article, Quirin and Bode (2014) provide an overview of research using the Implicit Positive and Negative Affect Test (IPANAT). Participants judge the extent to which nonsense words from an artificial language express certain affective states or traits. The test is reliable, demonstrates good factor validity, and predicts theoretically relevant criteria of affectivity such as changes in levels of cortisol in reaction to a stressful situation. The authors conclude their highly informative and convincing review with a proposal for novel variants of the IPANAT procedure for the assessment of the discrete emotions happiness, anger, sadness, and fear. Despite its conventional surface-level format, this OPT, like all other OPTs that have been presented in this special issue, seems to have great potential to impact future developments in this field.

Taken together, the papers we have compiled in this special issue show that (a) the diversity of procedures continues to be a peculiarity of OPTs, (b) considerable advances have been made in estimating the reliability of OPTs and separating unreliability from true change across measurement occasions, (c) OPTs continue to display a varying and fragile degree of convergence with other measurement methods that presumably measure the same construct, (d) the basic idea of OPTs contains a rich potential for the development of specific procedures, and (e) the question of what exactly OPTs measure and how this differs from other methods has yet to be answered adequately. Convincing answers to this question and convincing strategies for estimating the construct validity of OPTs remain challenges.

We believe that these challenges can be handled if we accept, at least preliminarily, the possibility that OPTs, like indirect procedures, measure personality components that are distinct from those that materialize in Q-data and L-data (Gschwendner et al., 2008; Wilson, Lindsey, & Schooler, 2000). Moreover, it might be useful to accept – again, at least preliminarily – that OPTs (and indirect procedures) measure personality components that cannot be measured or can be measured less well via self-report questionnaires (e.g., Häusler, 2004; Ortner, 2012). Finally, it might be useful to accept the idea that different OPTs that were developed as measures of the same construct measure nonoverlapping or only partly overlapping components of that construct. If we accept these possibilities, the classic multitrait-multimethod framework and its validity criterion of convergence may no longer be the royal road for estimating validity. Moreover, method factors may no longer be an appropriate term for causes of nonconvergence between measures as these causes may be substantive in nature and may carry the potential to advance substantive theories of behavior (Schmitt, 2006).

What other framework can be used to solve the challenges we have identified? If we take Cronbach and Meehl’s (1955) definition of construct validity seriously and agree that a test’s construct validity is given when empirical data confirm claims that were made based on a theory describing the given construct, then we need a theory that goes beyond simple assumptions about the (cor)relations between constructs. We need more sophisticated theories that include boundary conditions of relations and effects between constructs, interactions between constructs, nonlinear effects and relations between these constructs, as well as processes and changes. Moreover, these theories must not be limited to personality factors. Rather, they should include situational influences on behavior (including the behaviors shown in assessment methods) and person × situation interactions (Dislich et al., 2012; Schmitt, 2009a, 2009b; Schmitt & Baumert, 2010; Schmitt et al., 2013).

Based on dual-process theories of information processing and behavior (e.g., Gawronski & Bodenhausen, 2006; Strack & Deutsch, 2004), a new theoretical model was recently proposed, one that may serve as a starting point for interpreting results on the convergence and divergence of OPTs with questionnaires and indirect measures (Schmitt, Hofmann, Gschwendner, Gerstenberg, & Zinkernagel, in press). According to this model of moderated convergence between direct, indirect, and behavioral measures of personality traits, OPTs may tap either more spontaneous or else more reflective components of a construct and may therefore converge better with either indirect measures or questionnaires. Initial empirical results further support the claim that additional moderators of convergence serve as a starting point from which to formulate more complex but also more successful hypotheses on the validity of OPTs (Dislich et al., 2010). Potential moderators of convergence are personality traits, situational characteristics, attributes of the construct, and attributes of the measurement procedure. Furthermore, research on the consistency between direct and indirect personality measures suggests that the degree of convergence between OPTs and other methods will depend on the conceptual correspondence between the employed measures and the constructs as well as their similarity in specificity and content (Friese, Hofmann, & Schmitt, 2008; Gschwendner et al., 2008; Hofmann, Gschwendner, Nosek, & Schmitt, 2006).

To summarize: OPTs are very diverse, and findings on the psychometric properties of one OPT cannot be generalized to other OPTs. Recent research on OPTs has provided promising results on the psychometric properties of a number of OPTs, especially on their validity, and these findings are supported by the new findings we present here. We would like to encourage the readers of this special issue to add to the growing body of research by submitting OPTs to rigorous theoretical, methodological, and empirical analyses. There is a need for research on the psychometric properties of existing or new OPTs in different populations and in different domains of psychology (e.g., forensic, health, clinical, occupational). We further encourage readers to add their theoretical input to address behavior-based methods of assessment and their relation to existing theories. We expect that bringing OPTs further into research and practice will not only change our views on personality assessment in the future. We also expect that enlarging the toolbox of psychological assessment methods will enable more valid predictions of relevant behavior and outcomes in the long term.

References

  • Aguado, D. , Rubio, V. J. , Lucía, B. (2011). The Risk Propensity Task (PTR): A proposal for a behavioral performance-based computer test for assessing risk propensity. Anales de Psicologia, 27, 862–870. First citation in articleGoogle Scholar

  • Arendasy, M. , Sommer, M. , Herle, M. , Schützhofer, B. , Inwanschitz, D. (2011). Modeling effects of faking on an objective personality test. Journal of Individual Differences, 32, 210–218. doi: 10.1027/1614-0001/a000053 First citation in articleLinkGoogle Scholar

  • Baumert, A. , Schlösser, T. , Schmitt, M. (2014). Economic games: A performance-based assessment of fairness and altruism. European Journal of Psychological Assessment, 30, 178–192. doi: 10.1027/1015-5759/a000183 First citation in articleLinkGoogle Scholar

  • Campbell, D. T. , Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105. First citation in articleCrossrefGoogle Scholar

  • Cattell, R. B. (1941). An objective test of character-temperatment I. Journal of General Psychology, 25, 59–73. First citation in articleCrossrefGoogle Scholar

  • Cattell, R. B. (1944). An objective test of character-temperatment II. Journal of Social Psychology, 19, 99–114. First citation in articleCrossrefGoogle Scholar

  • Cattell, R. B. (1946). Description and measurement of personality. New York, NY: World Book. First citation in articleGoogle Scholar

  • Cattell, R. B. (1955). Handbook for the objective-analytic personality test batteries: (including adult and child O-A batteries). Savoy, IL: Institute for Personality and Ability Testing. First citation in articleGoogle Scholar

  • Cattell, R. B. (1958). What is “objective” in “objective personality tests”? Journal of Consulting Psychology, 5, 285–289. First citation in articleCrossrefGoogle Scholar

  • Cattell, R. B. , Kline, P. (1977). The scientific analysis of personality and motivation. London, UK: Academic Press. First citation in articleGoogle Scholar

  • Cattell, R. B. , Warburton, F. W. (1967). Objective personality and motivation tests: A theoretical introduction and practical compendium. Chicago, IL: University of Illinois Press. First citation in articleGoogle Scholar

  • Courvoisier, D. S. , Nussbeck, F. W. , Eid, M. , Cole, D. A. (2008). Analyzing the convergent and discriminant validity of states and traits: Development and applications of multimethod latent state-trait models. Psychological Assessment, 20, 270–280. doi: 10.1037/a0012812 First citation in articleCrossrefGoogle Scholar

  • Cronbach, L. J. (1970). Essentials of psychological testing. New York, NY: Harper & Row. First citation in articleGoogle Scholar

  • Cronbach, L. J. , Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302. First citation in articleCrossrefGoogle Scholar

  • De Houwer, J. , Teige-Mocigemba, S. , Spruyt, A. , Moors, A. (2009). Implicit measures: A normative analysis and review. Psychological Bulletin, 135, 347–368. doi: 10.1037/a0014211 First citation in articleCrossrefGoogle Scholar

  • Dislich, F. X. R. , Imhoff, R. , Banse, R. , Altstötter-Gleich, C. , Zinkernagel, A. , Schmitt, M. (2012). Discrepancies between implicit and explicit self-concepts of intelligence predict performance on tests of intelligence. European Journal of Personality, 26, 212–220. First citation in articleCrossrefGoogle Scholar

  • Dislich, F. X. R. , Zinkernagel, A. , Ortner, T. M. , Schmitt, M. (2010). Convergence of direct, indirect, and objective risk taking measures in the domain of gambling: The moderating role of impulsiveness and self-control. Journal of Psychology, 218, 20–27. First citation in articleAbstractGoogle Scholar

  • Elliot, S. , Lawty-Jones, M. , Jackson, C. (1996). Effects of dissimulation on self-report and objective measures of personality. Personality and Individual Differences, 21, 335–343. First citation in articleCrossrefGoogle Scholar

  • Fazio, R. H. , Olson, M. A. (2003). Implicit measures in social cognition research: Their meaning and use. Annual Review of Psychology, 54, 297–327. doi: 10.1146/annurev.psych.54.101601.145225 First citation in articleCrossrefGoogle Scholar

  • Fitts, P. M. (1946). German applied psychology during World War II. American Psychologist, 1, 151–161. First citation in articleCrossrefGoogle Scholar

  • Friese, M. , Hofmann, W. , Schmitt, M. (2008). When and why do implicit measures predict behaviour? Empirical evidence for the moderating role of opportunity, motivation, and process reliance. European Review of Social Psychology, 19, 285–338. doi: 10.1080/10463280802556958 First citation in articleCrossrefGoogle Scholar

  • Gawronski, B. , Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. [Review]. Psychological Bulletin, 132, 692–731. First citation in articleCrossrefGoogle Scholar

  • Greenwald, A. G. , McGhee, D. E. , Schwartz, J. K. L. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–1480. First citation in articleCrossrefGoogle Scholar

  • Greenwald, A. G. , Poehlman, T. A. , Uhlmann, E. L. , Banaji, M. R. (2009). Understanding and using the implicit association test: III Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17–41. doi: 10.1037/a0015575 First citation in articleCrossrefGoogle Scholar

  • Gschwendner, T. , Hofmann, W. , Schmitt, M. (2008). Convergent and predictive validity of implicit and explicit anxiety measures as a function of specificity similarity and content similarity. European Journal of Psychological Assessment, 24, 254–262. doi: 10.1027/1015-5759.24.4.254 First citation in articleLinkGoogle Scholar

  • Häcker, H. , Schmidt, L. R. , Schwenkmezger, P. , Utz, H. E. (1975). Objektive Testbatterie, OA-TB 75 [Objective Test Battery, OA-TB 75]. Weinheim: Beltz. First citation in articleGoogle Scholar

  • Häusler, J. (2004). An algorithm for the separation of skill and working style. Psychology Science, 4, 433–450. First citation in articleGoogle Scholar

  • Hofmann, K. , Kubinger, K. D. (2001). Herkömmliche Persönlichkeitsfragebogen und Objektive Persönlichkeitstests im “Wettstreit” um Unverfälschbarkeit [Personality questionnaires and objective personality tests in contest: More or less fakeable?] Report Psychologie, 26, 298–304. First citation in articleGoogle Scholar

  • Hofmann, W. , Gawronski, B. , Gschwendner, T. , Le, H. , Schmitt, M. (2005). A meta-analysis on the correlation between the implicit association test and explicit self-report measures. Personality and Social Psychology Bulletin, 31, 1369–1385. doi: 10.1177/0146167205275613 First citation in articleCrossrefGoogle Scholar

  • Hofmann, W. , Gschwendner, T. , Nosek, B. , Schmitt, M. (2006). What moderates implicit-explicit consistency? European Review of Social Psychology, 16, 335–390. First citation in articleCrossrefGoogle Scholar

  • Hofmann, W. , Schmitt, M. (2008). Advances and challenges in the indirect measurement of individual differences at age 10 of the implicit association test. European Journal of Psychological Assessment, 24, 207–209. doi: 10.1027/1015-5759.24.4.207 First citation in articleLinkGoogle Scholar

  • Hundleby, J. D. , Pawlik, K. , Cattell, R. B. (1965). Personality factors in objective test devices. San Diego, CA: Knapp. First citation in articleGoogle Scholar

  • Jasper, F. , Ortner, T. M. (2014). The tendency to fall for distracting information while making judgments – development and validation of the Objective Heuristic Thinking Test. European Journal of Psychological Assessment 30, 193–207. doi: 10.1027/1015-5759/a000214 First citation in articleLinkGoogle Scholar

  • Koch, T. , Ortner, T. M. , Eid, M. , Caspers, J. , & Schmitt, M. (2014). Evaluating the construct validity of objective personality tests using a Multitrait-Multimethod-Multioccasion (MTMM-MO) approach. European Journal of Psychological Assessment, 30, 208–230. doi: 10.1027/1015-5759/a000212 First citation in articleLinkGoogle Scholar

  • Kubinger, K. D. (2009). The technique of objective personality-tests sensu R. B. Cattell nowadays: The Viennese pool of computerized tests aimed at experiment-based assessment of behavior. Acta Psychologica Sinica, 41, 1024–1036. First citation in articleCrossrefGoogle Scholar

  • Kubinger, K. D. , Ebenhöh, J. (1996). Arbeitshaltungen – Kurze Testbatterie: Anspruchsniveau, Frustrationstoleranz, Leistungsmotivation, Impulsivität/Reflexivität [Working Style – A short test-battery: Level of aspiration, achievement motivation, frustration tolerance, impulsiveness/reflexiveness][Software and Manual]. Frankfurt/M.: Swets Test Services. First citation in articleGoogle Scholar

  • Lejuez, C. W. , Aklin, W. M. , Jones, H. A. , Richards, J. B. , Strong, D. R. , Kahler, C. W. , Read, J. P. (2003). The Balloon Analogue Risk Task (BART) differentiates smokers and nonsmokers. Experimental and Clinical Psychopharmacology, 11, 26–33. doi: 10.1037/1064-1297.11.1.26 First citation in articleCrossrefGoogle Scholar

  • Lejuez, C. W. , Aklin, W. M. , Zvolensky, M. J. , Pedulla, C. M. (2003). Evaluation of the Balloon Analogue Risk Task (BART) as a predictor of adolescent real-world risk-taking behaviors. Journal of Adolescence, 26, 475–479. doi: 10.1016/S0140-1971(03)00036-8 First citation in articleCrossrefGoogle Scholar

  • Lejuez, C. W. , Read, J. P. , Kahler, C. W. , Richards, J. B. , Ramsey, S. E. , Stuart, G. L. , … Brown, R. A. (2002). Evaluation of a behavioral measure of risk taking: The Balloon Analogue Risk Task (BART). Journal of Experimental Psychology–Applied, 8, 75–84. doi: 10.1037//1076-898x.8.2.75 First citation in articleCrossrefGoogle Scholar

  • Lejuez, C. W. , Richards, J. B. , Read, J. P. , Kahler, C. W. , Ramsey, S. E. , Stuart, G. L. , … Brown, R. A. (2002). Evaluation of a behavioral measure of risk taking: The Balloon Analogue Risk Task (BART). Journal of Experimental Psychology – Applied, 8, 75–84. doi: 10.1037//1076-898X.8.2.75 First citation in articleCrossrefGoogle Scholar

  • Meyer, G. J. , Kurtz, J. E. (2006). Advancing personality assessment terminology: Time to retire “objective” and “projective” as personality test descriptors. [Editorial Material]. Journal of Personality Assessment, 87, 223–225. doi: 10.1207/s15327752jpa8703_01 First citation in articleCrossrefGoogle Scholar

  • Nosek, B. A. (2007). Implicit-explicit relations. Current Directions in Psychological Science, 16, 65–69. First citation in articleCrossrefGoogle Scholar

  • Ortner, T. M. (2012). Teachers’ burnout is related to lowered speed and lowered quality for demanding short-term tasks. Psychological Test and Assessment Modeling, 54, 20–35. First citation in articleGoogle Scholar

  • Ortner, T. M., Horn, R., Kersting, M., Krumm, S., Kubinger, K. D., Proyer, R. T., … Westhoff, K. (2007). Standortbestimmung und Zukunft Objektiver Persönlichkeitstests [Current situation and future of objective personality tests]. Report Psychologie, 75.

  • Ortner, T. M., Kubinger, K. D., Schrott, A., Radinger, R., & Litzenberger, M. (2006). Belastbarkeits-Assessment: Computerisierte Objektive Persönlichkeits-Testbatterie – Deutsch (BAcO-D) [Stress resistance assessment: Computerized objective test battery – German version] [Software and manual]. Frankfurt/M., Germany: Harcourt Assessment.

  • Ortner, T. M., Proyer, R. T., & Kubinger, K. D. (2006). Theorie und Praxis Objektiver Persönlichkeitstests [Theory and practice of objective personality tests]. Bern, Switzerland: Hans Huber.

  • Pawlik, K. (2006). Objektive Tests in der Persönlichkeitsforschung [Objective tests in personality research]. In T. M. Ortner, R. T. Proyer, & K. D. Kubinger (Eds.), Theorie und Praxis Objektiver Persönlichkeitstests [Theory and practice of objective personality tests] (pp. 16–23). Bern, Switzerland: Hans Huber.

  • Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89, 277–293.

  • Poorthuis, A. M. G., Thomaes, S., Denissen, J. J. A., de Castro, B. O., & van Aken, M. A. G. (2014). Personality in action: Can brief behavioral personality tests predict children’s academic and social adjustment across the transition to secondary school? European Journal of Psychological Assessment, 30, 169–177. doi: 10.1027/1015-5759/a000186

  • Proyer, R. T. (2007). Convergence of conventional and behavior-based measures: Towards a multimethod approach in the assessment of vocational interests. Psychology Science Quarterly, 49, 168–183.

  • Proyer, R. T., & Häusler, J. (2007). Assessing behavior in standardized settings: The role of objective personality tests. International Journal of Clinical and Health Psychology, 7, 537–546.

  • Proyer, R. T., & Häusler, J. (2008). Multimethodische Objektive Interessentestbatterie (MOI) [Multimethod objective interest test battery]. Mödling, Austria: Schuhfried.

  • Quirin, M., & Bode, R. C. (2014). Submerging the surface of conscious emotions: Using the IPANAT to measure implicit trait and state affect. European Journal of Psychological Assessment, 30, 231–237. doi: 10.1027/1015-5759/a000190

  • Rubio, V. J., Hernández, J. M., Zaldivar, F., Marquez, O., & Santacreu, J. (2010). Can we predict risk-taking behavior? Two behavioral tests for predicting guessing tendencies in a multiple-choice test. European Journal of Psychological Assessment, 26, 87–94. doi: 10.1027/1015-5759/a000013

  • Santacreu, J., Rubio, V. J., & Hernández, J. M. (2006). The objective assessment of personality: Cattell’s T-data revisited and more. Psychology Science, 48, 53–68.

  • Schmidt-Atzert, L. (2007). Objektiver Leistungsmotivations Test (OLMT) [Objective Achievement Motivation Test] [Software and manual]. Mödling, Austria: Dr. G. Schuhfried GmbH.

  • Schmidt, L. R. (1975). Objektive Persönlichkeitsmessung in diagnostischer und klinischer Psychologie [Objective measurement of personality in assessment and clinical psychology]. Weinheim, Germany: Beltz.

  • Schmitt, M. (2006). Conceptual, theoretical, and historical foundations of multimethod assessment. In M. Eid & E. Diener (Eds.), Handbook of multimethod measurement in psychology (pp. 9–25). Washington, DC: American Psychological Association.

  • Schmitt, M. (2009a). Moderated consistency between direct, indirect, and behavioral indicators of dispositions. Retrieved from www.uni-landau.de/schmittmanfred/english/forschung/IAT/index.html

  • Schmitt, M. (2009b). Person x situation-interactions as moderators. Journal of Research in Personality, 43, 267.

  • Schmitt, M., & Baumert, A. (2010). On the diversity of dynamic person x situation interactions. European Journal of Personality, 24, 497–500.

  • Schmitt, M., Gollwitzer, M., Baumert, A., Blum, G., Gschwendner, T., Hofmann, W., & Rothmund, T. (2013). Proposal of a nonlinear interaction of person and situation (NIPS) model. Frontiers in Psychology, 4, 499.

  • Schmitt, M., Hofmann, W., Gschwendner, T., Gerstenberg, F. X., & Zinkernagel, A. (in press). A model of moderated convergence between direct, indirect, and behavioral measures of personality traits. In A. J. R. Van de Vijver & T. M. Ortner (Eds.), Behavior based assessment: Going beyond self report in the personality, affective, motivation, and social domains. Göttingen, Germany: Hogrefe.

  • Steyer, R., Schmitt, M., & Eid, M. (1999). Latent state-trait theory and research in personality and individual differences. European Journal of Personality, 13, 389–408.

  • Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8, 220–247.

  • Wilson, T. D., Lindsey, S., & Schooler, T. Y. (2000). A model of dual attitudes. Psychological Review, 107, 101–126.

  • Ziegler, M., Schmidt-Atzert, L., Bühner, M., & Krumm, S. (2007). Fakability of different measurement methods for achievement motivation: Questionnaire, semi-projective, and objective. Psychology Science, 49, 291–307.

  • Ziegler, M., Schmukle, S. C., Egloff, B., & Bühner, M. (2010). Investigating measures of achievement motivation(s). Journal of Individual Differences, 31, 15–21.

Tuulia M. Ortner, Department of Psychology, Division for Psychological Assessment, University of Salzburg, Hellbrunnerstrasse 34, 5020 Salzburg, Austria, +43 662 8044-5181, +43 662 8044-5126,