ABSTRACT
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs of engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging large numbers of users at low time and monetary cost. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.