Open Access · Original Article

On the (Mis)Use of Deception in Web-Based Research

Challenges and Recommendations

Published Online: https://doi.org/10.1027/2151-2604/a000466

Abstract

The deception of research participants remains a controversial issue in the behavioral sciences. Current ethics codes consistently limit the use of deception to cases in which non-deceptive alternatives are unfeasible and, crucially, require that participants subjected to deception be fully debriefed and given an option to withdraw their data after learning about the deception. These conditions pose a particular challenge in the context of web-based research because participants can typically discontinue a study unilaterally (i.e., drop out by simply closing the browser window), in which case full debriefing and an option to withdraw one’s data are no longer available. As a consequence, the study would no longer be compatible with ethical standards. Based on recent meta-analytical data, we provide an existence proof of this problem, showing that deception is used in web-based research with little to no indication of safeguards ensuring full debriefing and subsequent data withdrawal options. We close by revisiting recommendations for the (non-)use of deception in web-based research and offer solutions for implementing such safeguards in case deception is truly unavoidable.

Despite decades of normative controversies (Baumrind, 1964, 1985; Hertwig & Ortmann, 2008a), deception of research participants remains a hot-button issue in the behavioral sciences. In line with common practice, deception is herein understood as an act of commission rather than omission (Ortmann, 2019), reflecting the “consensus [that] has emerged across disciplinary borders that intentional and explicit provision of erroneous information – in other words, lying – is deception, whereas withholding information about research hypotheses, the range of experimental manipulations, or the like ought not to count as deception” (Hertwig & Ortmann, 2008b, p. 222). As early as the mid-1960s, deception was an integral part of psychological research (Stricker, 1967), and there is no indication that its prevalence has changed much since (Hertwig & Ortmann, 2001; Kimmel, 2001; Smith et al., 2009). In fact, even today, researchers are being advised and encouraged to deceive participants still more effectively (Olson & Raz, 2021).

One recurring explanation for the continued reliance on deception is that it can be necessary – for example, to uphold validity in the face of possible demand effects or to avoid still more serious ethical breaches such as harming another individual (rather than only claiming that this is the case). Crucially, however, even those arguing that deception cannot be banned altogether (Bortolotti & Mameli, 2006; Bröder, 1998; Christensen, 1988; Cook & Yamagishi, 2008; Pittenger, 2002) consistently acknowledge that it must be a very well-justified last resort (Kimmel, 2011; Kimmel et al., 2011).

In line with this position, current ethics codes of psychological societies do not rule out deception entirely, but (a) strictly limit its use to studies of fundamental importance in which deception is unavoidable and (b) require that particular measures be taken, that is, that further safeguards be put in place. Specifically, deception is considered acceptable if – and only if – “the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and … effective non-deceptive alternative procedures are not feasible” and under the condition that researchers “explain any deception … to participants as early as is feasible, preferably after their participation, but no later than after the data collection, and permit participants to withdraw their data” (American Psychological Association, 2017, Section 8.07). Of note, other psychological societies take a similar stance, for example, the European Federation of Psychologists’ Associations (see Section 3.4 of their Meta-Code of Ethics).

Thus, however strict or lenient one may be in determining whether a study is of outstanding value, it is undeniable that – per the ethics codes of our field – deception is reserved for the few necessary exceptions and requires additional precautions. Contrary to the first condition, however, deception is neither a rare exception (see above) nor is its use typically justified by the absence of feasible, non-deceptive alternatives (Hilbig et al., in press). The additional precautions, in turn, pose a particular challenge for the context of web-based research and will thus be our main focus in what follows.

Deception in Web-Based Research

As was pointed out more than two decades ago (Frankel & Siang, 1999), full debriefing and subsequent data withdrawal can be particularly difficult to accomplish in web-based research: Participants can typically discontinue a study unilaterally (i.e., drop out by simply closing the browser window), in which case both full debriefing and subsequent data withdrawal may simply become impossible to ensure (Barchard & Williams, 2008; Nosek et al., 2002; Skitka & Sargis, 2006). Consequently, a study – even one in which deception was unavoidable – would no longer be compatible with ethical standards. Note that, even if one were to discard all incomplete datasets (and thereby solve the data deletion problem), one is still faced with the challenge of ensuring full debriefing of all participants who encountered deception in a study. Exactly because immediate debriefing after study completion cannot be ensured, given the possibility of participants spontaneously dropping out (Reips, 2002), it has been emphatically recommended that researchers “avoid deception in any Web-based research, at all costs” (Reips & Birnbaum, 2011, p. 582, emphases added).

Given the above, one would expect that (a) deception is particularly uncommon in web-based research and, especially, (b) that it is only used if both full debriefing and options for data withdrawal can be ensured (to a reasonable extent) for all participants. Some early indication of the prevalence of deception in web-based research (and, in part, of the issue of debriefing) was provided by Skitka and Sargis (2006), who found “reasons to be concerned about the responsible use… of Internet-based research” (p. 550). In their review of web-based studies published in APA journals from 2003 to 2004 or obtained through listserv requests, they found that 17% of studies had used deception, half of which provided no information on whether or how debriefing had been achieved. However, much (indeed most) web-based research has been conducted since Skitka and Sargis (2006), given the explosive increase of web-based studies in the past decade (Krantz & Reips, 2017) and the MTurkification of psychology (Anderson et al., 2019). Arguably, most or even all of these more recent studies may have conformed to the above expectations regarding the use of deception and corresponding debriefing. Moreover, Skitka and Sargis (2006) did not specifically code for information on data withdrawal options, an issue we will additionally consider herein.

To test empirically whether deception in web-based research is consistently accompanied by safeguards to ensure full debriefing and data withdrawal, we reviewed studies from a lively, interdisciplinary field of research: individual differences in prosocial behavior. One reason for this choice was that we could rely on a recent authoritative meta-analysis (Thielmann et al., 2020) including 770 studies for which it had already been coded – prior to and independent of the present investigation – whether studies were conducted online and whether they used deception. Another advantage of this research area is that it spans several subfields within psychology (e.g., cognitive, personality, and social) and beyond (e.g., economics or neuroscience) and is featured prominently in the most renowned journals in psychology (e.g., Journal of Experimental Psychology: General, Journal of Personality and Social Psychology) and beyond (e.g., Evolution and Human Behavior, Journal of Conflict Resolution). As such, the studies underlying this meta-analysis can be considered to represent a substantively broad range of purportedly “top-notch” behavioral science. Note that, as an upshot of this selection, many studies used deception needlessly, that is, despite feasible non-deceptive alternatives (Hilbig et al., in press). Thus, many can be considered to be hanging by a thin ethical thread, which is why one would especially expect that researchers went to great lengths to ensure full debriefing and options for data withdrawal.

Data and Coding

The meta-analysis by Thielmann et al. (2020) identified 266 studies (i.e., 35%) that used deception, defined as actively providing false information to participants as per the common consensus reviewed above. Of these, we identified 42 (i.e., 16%) online studies published as part of a journal article. We only included published studies given that the APA ethics code specifically refers to published research. The data for the overall meta-analysis are publicly available at the Open Science Framework (OSF; https://osf.io/dbuk6/). For the 42 identified studies, we coded whether any measures were reported ensuring that all participants (including those who dropped out prematurely) were (a) debriefed and (b) informed about data withdrawal options. Specifically, a research assistant first coded whether any corresponding information was provided in the respective publication for each study. Although some studies reported that participants were debriefed at the end of the survey or after completing the task in which deception was used, none gave any indication that specific safeguards were implemented to ensure debriefing of participants who dropped out prematurely.

However, given that it may well be that such measures were implemented but not reported in the respective publication, we next contacted the corresponding authors of all identified articles and asked them to answer the following questions: (1) “Did you ensure that all participants, including those who prematurely dropped out from participation, were debriefed?”; (2) “Did you ensure that all participants, including those who prematurely dropped out from participation, could request their data be deleted?”; and (3) “If yes for either of these, how did you ensure it technically?”. A total of 56% of authors contacted replied to our request within 6 weeks, and all who replied provided corresponding information. To avoid any finger-pointing, we deliberately refrain from including any information about the authors we contacted and who responded to our request.

Finally, we coded for each paper included in the meta-analysis whether (a) it was published in a psychology journal and/or (b) at least one of its authors is a psychologist (by virtue of their PhD or affiliation with a psychology department). All articles satisfied at least one of these criteria and typically both: 83% were published in psychology journals, 94% included at least one psychologist as an author, and 76% fulfilled both criteria. In light of these numbers, we maintain that it is appropriate to review these articles under ethics codes designed for the field of psychology.

Results and Discussion

As noted above, around 16% of studies identified by Thielmann et al. (2020) as having used deception were conducted via the Internet. In turn, the 42 studies using deception represent 27% of all (157) studies conducted via the Internet within Thielmann et al.’s (2020) dataset. As such, our estimate of the prevalence of deception in online studies is comparable to the one (17%) identified by Skitka and Sargis (2006), despite the very different approaches used (they sampled from journals, we sampled from a substantive area of research). As these numbers confirm, deception in online studies is (still) common practice.

More strikingly, however, we also found no evidence indicating that these studies implemented safeguards to ensure full debriefing and/or data withdrawal options. Specifically, although we found information that participants were debriefed in 14 of the 42 studies (33%), debriefing was generally presented only after completion of the task(s) in question or the entire survey, thus failing to ensure debriefing for participants who dropped out prematurely (but who may have nonetheless been faced with deception). This was also confirmed by all original study authors who replied to our request: Participants who dropped out were not debriefed.

In turn, we did not find explicit information on when or how participants were informed about their data withdrawal options for any of the studies. Although several authors reported in their responses to our request that participants were informed about the option to withdraw their data in the informed consent at the beginning of the study, in none of the studies were specific safeguards implemented to ensure that participants could withdraw their data after having learned about the deception used in the study – let alone if they dropped out.

In summary, the findings reveal that a notable proportion of web-based studies used deception, while none (to the best of our knowledge) implemented reliable safeguards ensuring full debriefing and/or a subsequent data withdrawal option for participants who may have dropped out prematurely. In our understanding, such studies constitute violations of our field’s ethical standards. Moreover, given that no indication of such safeguards was provided (or available upon request from the original authors), it is astonishing how many research ethics committees and journal editors must either have been unaware of this issue or indeed turned a blind eye. As an upshot, a sufficiently large number of journal editors appear content to publish web-based studies using deception without a word on how researchers ensured they remained within our field’s ethical boundaries. This is particularly astounding given that many journals (e.g., those published by the APA) explicitly require authors to confirm that they conformed to the association’s ethical guidelines.

It is surprising, to say the least, that researchers are currently expected to explicitly justify even the most arbitrary choices (e.g., data exclusions),1 but not deviations from ethical standards. In times replete with talk about improving our science, how can we be so blasé about the few fundamental rules that constitute “a bedrock of the profession” (Joyce & Rankin, 2010, p. 466)? Arguably, such practice may have detrimental effects on the reputation of behavioral research, with unknown and potentially severe consequences for the entire field of psychology (Birnbaum, 2004; Reips & Birnbaum, 2011). Indeed, even if one were to dismiss the practices uncovered here as limited to only one area of research – despite the fact that the reviewed research stems from a wide variety of (sub-)fields – it would be naive to hope that the reputational externalities will not extend to the wider field.

Remedies and Recommendations

In a final step, we discuss possible remedies to the situation identified above and derive recommendations for web-based research. Most importantly, we cannot stress enough that the most reliable and absolute solution to all of the problems and pitfalls mentioned herein is to refrain from using deception entirely (in web-based research or otherwise). However, although we would prefer to leave it at that, we realize that some researchers may not consider this a viable option. For those who continue to believe that (their) research necessitates deception, we must first reiterate that our ethics codes require that deception (a) be justified by the study’s exceptional value and the absence of non-deceptive alternatives and (b) only ever be implemented with specific safeguards ensuring full debriefing of all participants and options for data withdrawal (after debriefing).

To implement such safeguards in web-based research, a few suggestions have been made (e.g., Barchard & Williams, 2008; Nosek et al., 2002), and we briefly revisit these here. First, some have recommended implementing a “leave the study” button or link on every study page that would take participants to a debriefing. Although cost-efficient, the success of this safeguard depends entirely on participants’ compliance, meaning that practically all who wish to drop out must do so via this button or link rather than simply closing the browser window. It would thus seem risky at best and arguably ill-suited as a literal safeguard. A potentially safer option may be to ensure the automatic presentation of debriefing information upon closing the browser window, as would, for example, be possible through client-side programming. However, this requires that participants’ browsers allow the execution of such scripts, which cannot be taken for granted. At the very least, one would have to additionally ensure that the required scripts are not blocked upon starting the study (e.g., by testing whether JavaScript is enabled).
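To make these options concrete, consider the following minimal client-side sketch (in TypeScript; the /debriefing page and /api/dropout endpoint are hypothetical, not taken from any reviewed study). It also illustrates the inherent limitation: upon window closing, browsers only permit a generic confirmation dialog, so the debriefing itself cannot be displayed and the best one can do is log the dropout for later follow-up.

```typescript
// Minimal sketch of client-side dropout handling; page paths and the
// logging endpoint are illustrative assumptions.
let leftViaButton = false;

// Preferred path: a visible "leave the study" button on every page
// routes dropouts through the debriefing before they exit.
document.getElementById("leave-study")?.addEventListener("click", () => {
  leftViaButton = true;
  window.location.href = "/debriefing";
});

// Fallback: if the participant closes the window/tab instead, only a
// generic confirmation dialog can be triggered here -- browsers do not
// allow redirecting to, or displaying, custom debriefing content.
window.addEventListener("beforeunload", (event) => {
  if (!leftViaButton) {
    event.preventDefault();
    event.returnValue = ""; // legacy property; prompts in most browsers
  }
});

// Best effort: log the dropout server-side (sendBeacon is designed to
// survive page unload) so debriefing can be delivered via another channel.
window.addEventListener("pagehide", () => {
  if (!leftViaButton) {
    navigator.sendBeacon("/api/dropout", JSON.stringify({ ts: Date.now() }));
  }
});
```

Note that logging the dropout only helps if participants can later be reached through some other channel – which leads to the contact-based approach below.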

A potentially more promising approach, in our view, is to independently collect contact information from all participants, which can then be used to disseminate the debriefing to everyone, including those who dropped out prematurely, and to ensure that data withdrawal remains possible independent of whether and when dropout occurred. Specifically, one may first ask participants to register for the study, requiring them to provide an email address that is stored separately from the study data. Then, an email can be sent to all (potential) participants with two links: one for participation in the actual study and another for (later) data withdrawal. Both links would contain a unique random (non-identifying) code per participant, but email addresses and codes would never be stored together. Essentially, each participant receives a unique random code via email, which is used both for study participation and data withdrawal (thus ensuring that the data of a specific participant can later be deleted); that way, participants need not personally contact researchers, thus preserving full anonymity even upon data withdrawal. Nonetheless, everyone who registered (and thus potentially started the study) can be fully debriefed, regardless of whether they completed it or dropped out prematurely.
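As a rough illustration of this registration scheme – a sketch under assumed names and URLs, not the implementation of any reviewed study – the following TypeScript (Node.js) snippet shows how email addresses, participation codes, and study data can be kept in strictly separate stores:

```typescript
// Sketch of the registration/withdrawal-code scheme; URLs and the
// mail-sending stub are illustrative assumptions.
import { randomUUID } from "node:crypto";

// Two logically separate stores: email addresses and codes are never
// persisted together, so responses cannot be re-identified.
const registeredEmails: string[] = []; // used only to send debriefings
const validCodes = new Set<string>();  // study data is keyed by code only

// Stub; in practice, any mailer (e.g., an SMTP client) would be used.
async function sendMail(to: string, body: string): Promise<void> {
  console.log(`To: ${to}\n${body}`);
}

async function registerParticipant(email: string): Promise<void> {
  const code = randomUUID();    // unique, random, non-identifying
  registeredEmails.push(email); // stored WITHOUT the code
  validCodes.add(code);         // stored WITHOUT the email
  await sendMail(
    email,
    `Take part: https://study.example.org/run?code=${code}\n` +
      `Withdraw your data at any time: https://study.example.org/withdraw?code=${code}`
  );
}

// Full debriefing reaches everyone who registered -- including those who
// dropped out or never started -- and a withdrawal request simply deletes
// the data row matching the code, with no identity ever revealed.
async function debriefAll(text: string): Promise<void> {
  await Promise.all(registeredEmails.map((addr) => sendMail(addr, text)));
}
```

The crucial design property is that the link between email address and code exists only transiently, in the email itself; researchers can thus debrief all registrants and honor withdrawal requests without ever being able to match responses to identities.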

Of note, the latter approach, too, comes with potential limitations. For one, some loss of (subjective) anonymity may result from collecting personal information (such as email addresses). Indeed, data protection legislation may explicitly discourage or even prohibit the collection of personal data, even if it is stored separately and cannot be matched to responses. Moreover, platforms via which participants are recruited (e.g., panel providers or crowdsourcing platforms) may prohibit the collection of contact information. In these cases, however, it should typically be possible to contact all participants who started the study – including those who dropped out prematurely – post hoc via the corresponding platform, based on their individual IDs.

Conclusion

As per our field’s ethics codes, deception without a guarantee of debriefing and options for data withdrawal is out of the question – an inescapable fact that is either not widely known or, worse yet, regularly ignored in web-based research. As though deception of research participants were not bad enough for a profession that considers honesty one of its fundamental guiding principles (Francis, 2009), and even ignoring the more than worrisome observation that most studies relying on deception appear to do so needlessly and thus unethically (Hilbig et al., in press), web-based research is en route to becoming the poisonous case among the bad apples. We urge all those engaged in web-based research to turn this ship around and abandon deception, or at least firmly abide by the ethics code – ensuring complete debriefing and data withdrawal – whenever they (believe they) cannot do without deception.

We thank Alicia Seidl for assistance in the coding of studies.

1We do not mean arbitrary to imply inconsequential or irrelevant, nor do we question that every step taken to minimize p-hacking is worthwhile. However, data exclusion criteria are many and varied and certainly not subject to common rules or standards (the violation of which may undermine public trust in an entire profession) – unlike our field’s ethical guidelines.

References