ABSTRACT
Services like Amazon's Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous potential to solve a variety of problems in novel and interesting ways. However, we are only just beginning to understand how to design such processes. This paper explores two basic approaches: one where workers work alone in parallel, and one where workers iteratively build on each other's work. We present a series of experiments exploring the tradeoffs between these approaches in several problem domains: writing, brainstorming, and transcription. In each of our experiments, iteration increases the average quality of responses, and the increase is statistically significant in writing and brainstorming. However, in brainstorming and transcription, it is not clear that iteration is the best overall approach, in part because both of these tasks benefit from high variability of responses, which is more prevalent in the parallel process. Also, poor guesses in the transcription task can lead subsequent workers astray.
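The two process designs compared in the abstract can be sketched in code. The sketch below is purely illustrative: `ask_worker` and `vote` are hypothetical stubs standing in for calls to a crowd platform such as Mechanical Turk, not a real API, and the specific voting and improvement logic is an assumption, not the paper's exact procedure.

```python
# Hypothetical sketch of the parallel vs. iterative patterns.
# ask_worker and vote are illustrative stubs, not a real crowd API.

def ask_worker(prompt, draft=None):
    """Stub: post a task and return one worker's response.
    In the iterative case the worker sees the previous draft."""
    return (draft or "") + "+work"  # placeholder behavior

def vote(a, b):
    """Stub: ask workers to pick the better of two responses."""
    return max(a, b, key=len)  # placeholder: prefer the longer one

def parallel_process(prompt, n=6):
    """n workers respond independently; voting keeps the best."""
    responses = [ask_worker(prompt) for _ in range(n)]
    best = responses[0]
    for r in responses[1:]:
        best = vote(best, r)
    return best

def iterative_process(prompt, n=6):
    """Each worker improves the previous draft; a vote decides
    whether the new draft replaces the old one."""
    draft = ask_worker(prompt)
    for _ in range(n - 1):
        improved = ask_worker(prompt, draft)
        draft = vote(draft, improved)
    return draft
```

The structural difference this highlights is that the parallel process gathers independent samples (high variance, no shared context), while the iterative process threads each worker's output into the next task (accumulating quality, but also propagating any early mistake).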