Approval Voting and Incentives in Crowdsourcing
Author: Dengyong Zhou, Nihar B. Shah
Year of publication: 2020
Subject: Crowdsourcing; Approval voting; Incentive; Empirical research; Data science; Computational Mathematics; Statistics and Probability; Economics and Econometrics; Marketing; Information systems; Artificial intelligence and image processing; Computer Science (miscellaneous)
Source: ACM Transactions on Economics and Computation, 8:1-40
ISSN: 2167-8383, 2167-8375
DOI: 10.1145/3396863
Description: The growing need for labeled training data has made crowdsourcing a vital tool for developing machine learning applications. Workers on a crowdsourcing platform are typically shown a list of unlabeled items and, for each item, are asked to choose a label from the provided options. Workers on crowdsourcing platforms are not experts, which makes it essential to judiciously elicit the information they do have. With respect to this goal, current systems have two key shortcomings: (i) the incentives of the workers are not aligned with those of the requesters; and (ii) the interface does not allow workers to convey their knowledge accurately, because it forces them to make a single choice among a set of options. In this article, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and by coupling it with two strictly proper scoring rules. We additionally establish attractive optimality and uniqueness properties of our scoring rules. We also conduct preliminary empirical studies on Amazon Mechanical Turk, and the results of these experiments validate our approach.
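The paper's two scoring rules are not reproduced in this record. As a generic illustration of what "strictly proper" means, and of why a forced single choice (shortcoming (ii) above) can be suboptimal for a worker with partial knowledge, the sketch below uses the classic quadratic (Brier) scoring rule, which is not the paper's mechanism; the belief values are made up for the example.

```python
import numpy as np

def brier_reward(report, outcome):
    # Quadratic (Brier) scoring rule, written as a reward (higher is better).
    # It is strictly proper: expected reward is uniquely maximized by
    # reporting one's true belief.
    return 2 * report[outcome] - np.sum(report ** 2)

def expected_reward(belief, report):
    # Expected reward when the true label is drawn from `belief`
    # but the worker reports `report`.
    return sum(belief[i] * brier_reward(report, i) for i in range(len(belief)))

belief = np.array([0.6, 0.3, 0.1])  # hypothetical worker with partial knowledge
truthful = expected_reward(belief, belief)
forced_single = expected_reward(belief, np.array([1.0, 0.0, 0.0]))
print(truthful > forced_single)  # → True: truthful reporting earns strictly more
```

Under such a rule, a worker who is genuinely uncertain is rewarded for expressing that uncertainty rather than collapsing it into one guess, which is the incentive property the article's approval-voting mechanism is designed to provide for set-valued answers.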
Database: OpenAIRE
External link: