Review
Crowdsourcing Samples in Cognitive Science

https://doi.org/10.1016/j.tics.2017.06.007
Open access under a Creative Commons license

Trends

We estimate that, in the next few years, nearly half of all cognitive science research articles will involve samples of participants recruited from Amazon Mechanical Turk and other crowdsourcing platforms.

We review the technical aspects of programming experiments for the web and the resources available to experimenters (see the sketch following these points).

Crowdsourced participants offer a readily available and very different complement to traditional college student samples, and much is now known about the reproducibility of findings with these samples.

The population from which we are sampling is surprisingly small and highly experienced with cognitive science experiments, and this non-naïveté affects responses to frequently used measures.

The larger sample sizes that crowdsourcing affords bode well for addressing aspects of the replication crisis, but a possible tragedy of the commons looms now that cognitive scientists increasingly share the same participants.
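
As one concrete illustration of the web-programming issues raised above, the sketch below shows a common way to preload an audio stimulus in the browser so that download latency does not contaminate presentation timing. It is a minimal sketch assuming a standard browser environment; the file name is hypothetical, and this is not code from any study the review covers.

```typescript
// Minimal sketch: preload an audio stimulus before a trial begins so
// that network latency does not delay or jitter its presentation.
// The Audio element, its "canplaythrough" event, and load()/play()
// are standard Web APIs; the file name "tone.wav" is hypothetical.

function preloadAudio(url: string): Promise<HTMLAudioElement> {
  return new Promise((resolve, reject) => {
    const audio = new Audio(url);
    audio.preload = "auto";
    // Resolve once the browser estimates it can play to the end
    // without stalling for further buffering.
    audio.addEventListener("canplaythrough", () => resolve(audio), { once: true });
    audio.addEventListener("error", () => reject(new Error(`failed to load ${url}`)), { once: true });
    audio.load();
  });
}

// Usage: load during the inter-trial interval, play at stimulus onset.
preloadAudio("tone.wav").then((audio) => audio.play());
```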

Crowdsourcing data collection from research participants recruited from online labor markets is now common in cognitive science. We review who is in the crowd and who can be reached by the average laboratory. We discuss reproducibility and review some recent methodological innovations for online experiments. We consider the design of research studies and the ethical issues that arise. We review how to code experiments for the web, what is known about video and audio presentation, and the measurement of reaction times. We close with comments on the high levels of experience of many participants and an emerging tragedy of the commons.
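
To give a flavor of browser-based reaction-time measurement, the sketch below records a keypress latency with the standard high-resolution performance.now() timer. It is a minimal illustration under assumed conditions (a visible HTML element with id "stimulus", which is hypothetical); real experiments must also contend with monitor refresh, event-loop delays, and keyboard polling, which the review discusses.

```typescript
// Minimal sketch: show a stimulus and measure the keypress reaction
// time with performance.now(), which returns a monotonic, sub-millisecond
// timestamp in milliseconds. The element id "stimulus" is hypothetical.

function runTrial(stimulus: HTMLElement): Promise<number> {
  return new Promise((resolve) => {
    stimulus.style.visibility = "visible";
    const onset = performance.now();          // stimulus onset time

    const onKey = (_event: KeyboardEvent) => {
      const rt = performance.now() - onset;   // elapsed time in ms
      document.removeEventListener("keydown", onKey);
      stimulus.style.visibility = "hidden";
      resolve(rt);
    };
    document.addEventListener("keydown", onKey);
  });
}

// Usage: run one trial and log the measured reaction time.
const stim = document.getElementById("stimulus");
if (stim) {
  runTrial(stim).then((rt) => console.log(`RT: ${rt.toFixed(1)} ms`));
}
```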
