Computers in Human Behavior

Volume 27, Issue 6, November 2011, Pages 2386-2391
Does online psychological test administration facilitate faking?

https://doi.org/10.1016/j.chb.2011.08.001

Abstract

This study examined for the first time the effect of delivery mode on faking good and faking bad in psychological testing. Participants (N = 223) completed questionnaires either online or in pen-and-paper format in a mixed experimental design. After completing measures of personality (HEXACO-60; Ashton & Lee, 2009) and depression (DASS-21; Lovibond & Lovibond, 1995) under standard instructions, participants then faked the personality measure as if applying for a job, and faked the depression measure as if experiencing severe depression. Equivalence of internet and pen-and-paper administration on faking was then measured between groups. As predicted, participants were able to fake good on the HEXACO-60 and to fake bad on the DASS-21. Also as predicted, there were no significant differences in faked scores as a function of test administration mode. Further, examination of effect sizes confirmed that the influence of test administration mode was small. It was concluded that online and pen-and-paper presentation are largely equivalent when an individual is faking responses in psychological testing. Given the advantages of online assessment and the importance of valid psychological testing, future research should investigate whether the current findings can be generalised to other faking and malingering scenarios and to other psychological measures.

Highlights

• We examined whether online or traditional test administration influences fakability.
• Administration mode did not influence scores when faking good.
• Administration mode did not influence scores when faking bad.
• Online and pen-and-paper presentation appear equivalent when an individual is faking.
• Future research should investigate other measures and faking scenarios.

Introduction

The internet is increasingly used for psychological research and assessment in a number of contexts, including for vocational (Piotrowski & Armstrong, 2006) and clinical (Hedman et al., 2010) purposes. Much research has examined the equivalence of pen-and-paper and web-based versions of specific psychological measures in a variety of domains (e.g. Coles et al., 2007; Denniston et al., 2010; Hedman et al., 2010; Lewis et al., 2009; Templer & Lange, 2008). However, the equivalence of psychological test presentation via the internet and pen-and-paper has not been examined with regard to the susceptibility of self-reports to faking. This study aimed to explore for the first time the influence of delivery mode on the fakability of self-report psychological tests.

The internet has rapidly become a valuable medium for data collection as it is considered inexpensive, easily accessible, and discreet (Birnbaum, 2004). Other advantages of online data collection are that participants can be required to endorse answers to all items (thereby minimising missing data), and that data can be transferred electronically for analysis (thereby reducing data entry error) (Carlbring et al., 2007; Lewis et al., 2009). However, rather than assuming that internet and pen-and-paper administrations of psychological measures are interchangeable, it has been recommended that all psychological measures be evaluated to investigate whether internet and pen-and-paper administrations are comparable (Buchanan, 2002).

To date, a number of tests have been compared, including clinical measures (e.g. Carlbring et al., 2007; Coles et al., 2007; Herrero & Meneses, 2006), personality measures (e.g. Templer & Lange, 2008), ability measures (e.g. Ihme et al., 2009), and health- and risk-related behaviour measures (e.g. Horswill & Coster, 2001; Lewis et al., 2009; McCabe et al., 2006; Whittier et al., 2004). Overall, findings suggest that the internet is both a feasible and largely comparable method for conducting psychological testing. However, no extant research has examined the role of mode of delivery in the administration of self-report psychometric tests and the potential facilitation of faking behaviour.

Faking or malingering occurs when an individual strategically alters their self-representation in a particular test (Grieve & Mahar, 2010). Faking good is characterised by responses that augment an individual’s actual state, making them appear psychologically superior (for example, in a job application), while faking bad occurs when an individual presents themselves as psychologically worse than they actually are (for example, to be diagnosed with a disorder).

Faking of psychological assessments may have a number of consequences. For example, in vocational contexts, faking will not only influence who gets hired (Mueller-Hanson, Heggestad, & Thornton, 2003), but can also impact the subsequent training and management of employees (Landers, Sackett, & Tuzinski, 2011). In clinical contexts, faking may influence access to therapy or medication (Suhr, Hammers, Dobbins-Buckland, Zimak, & Hughes, 2008).

This study aimed to build on the existing research regarding the validity of pen-and-paper and online testing methods by investigating whether administration mode influences an individual’s ability to fake a measure. To more fully address this aim, both faking good and faking bad scenarios were employed.

Previous research has shown that individuals are readily able to fake good in vocational contexts (for example, as if applying for a job) by maximising positive, job-relevant personality aspects and minimising negative personality aspects (Mahar et al., 2006). Therefore, an initial hypothesis was that participants would be able to alter their original personality profiles to a more positive faked profile when asked to complete a personality measure as if they were applying for a job. Specifically, it was anticipated that the faked profiles would score significantly higher than the original profiles on desirable employee characteristics (honesty/humility, extraversion, agreeableness, conscientiousness, and openness), and significantly lower on undesirable employee characteristics (emotionality).

The second hypothesis addressed the main research question. Given that most research into the equivalence of online and pen-and-paper personality testing has found that both modes of administration elicit similar test results (e.g. Templer & Lange, 2008), it was hypothesised that faked profile scores would be equivalent regardless of which mode of administration was used. While it is acknowledged that this is in fact testing the null hypothesis, and that it is difficult to ascertain whether a hypothesis of no difference is true (Nickerson, 2000), a hypothesis of this nature was required by the research question. It follows that, in order to test the second hypothesis, a close examination of effect size, rather than statistically significant differences alone, was indicated.

It has also been shown that individuals are able to fake bad in clinical contexts (for example, as if they have depression, see Grieve & Mahar, 2010). Therefore, it was hypothesised that when participants were asked to complete a depression measure as if they had depression, they would be able to alter their original scores on that measure to faked scores suggesting a provisional diagnosis of depression.

In order to address the main research question, scores were compared between groups of participants who faked the depression measure either online or using pen-and-paper. Again, as previous research has largely supported the equivalence of the two modes of administration for clinical measures (e.g. Carlbring et al., 2007), it was hypothesised that there would be no significant differences in faked depression scores as a function of administration method. Once more, as this prediction was testing the null hypothesis, close examination of the effect size was also undertaken.
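Because both equivalence hypotheses are predictions of no difference, the effect sizes carry most of the inferential weight. As a minimal sketch of the kind of calculation involved (the group scores below are hypothetical illustrations, not the study's data), Cohen's d with a pooled standard deviation can be computed like this:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled
    sample standard deviation (n - 1 in each group's variance)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical faked depression scores under each administration mode.
online = [32, 36, 30, 38, 34, 33]
paper = [31, 35, 33, 36, 32, 34]

d = cohens_d(online, paper)
print(round(abs(d), 2))  # |d| < 0.2 is conventionally "small" (Cohen, 1988)
```

Under this convention, a |d| below about 0.2 would support the interpretation that any difference between administration modes is small, even when significance tests alone cannot confirm the null hypothesis.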

Participants

The sample consisted of 223 participants (54 men, 169 women) who completed the questionnaire on the internet (63%) or on paper (37%). Participants were recruited from the student body of an Australian university (41.5%) and from the general public (51.8%); the remaining 6.7% did not report whether or not they were students. Participants were invited to participate via in-class announcements, word of mouth, and the social networking website Facebook. Participation was voluntary and no

Manipulation check

Answers to the manipulation check were reviewed to ensure that participants had understood and followed the experimental manipulation, and were dummy coded as either ‘followed instructions’ or ‘did not follow instructions’. Examples of responses coded as following instructions were “to bend the truth or lie straight out to get the job” and “trying to answer like a depressed person would”. Examples of responses coded as not following instructions included “to answer honestly and openly” and

Discussion

The aim of this study was to examine whether internet and pen-and-paper test administrations are equally susceptible to faking for measures of personality and depression. As predicted, participants were able to alter their test profiles to present themselves as a desirable employee, and also to appear as if a provisional diagnosis of depression was indicated. When faking good, emotionality tended to be under-reported, while honesty/humility, extraversion, agreeableness, conscientiousness and

Conclusions

This research examined for the first time the fakability of two measures (the HEXACO-60 and the DASS-21) as a function of test administration mode. The small effect sizes indicated that internet administration and pen-and-paper administration are largely equivalent when an individual is engaging in faking behaviours on these specific self-report measures of personality (faking good) and depression (faking bad). While future research should extend investigation to other contexts and measures,

Acknowledgements

The authors would like to thank Catherine McSwiggan for her assistance in data collection. We would also like to thank two anonymous reviewers for their useful comments.

References (28)

  • Ashton, M., et al. (2009). The HEXACO-60: A short measure of the major dimensions of personality. Journal of Personality Assessment.
  • Birnbaum, M. (2004). Human research and data collection via the Internet. Annual Review of Psychology.
  • Braver, M. C. W., et al. (1988). Statistical treatment of the Solomon Four Group Design: A meta-analytic approach. Psychological Bulletin.
  • Buchanan, T. (2002). Online assessment: Desirable or dangerous? Professional Psychology: Research and Practice.