Axiomatic Analysis of Aggregation Methods for Collective Annotation

Authors: Kruger, J., Endriss, U., Fernández, R., Qing, C.
Editors (proceedings): Lomuscio, A., Scerri, P., Bazzan, A., Huhns, M.
Contributors: ILLC (FNWI), Logic and Computation (ILLC, FNWI/FGw), Brain and Cognition, Faculty of Science, Logic and Language (ILLC, FNWI/FGw)
Language: English
Year of publication: 2014
Source: AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents and Multiagent Systems, May 5-9, 2014, Paris, France, pp. 1185-1192
Description: Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, for efficiently labelling large amounts of data using non-expert annotators. The individual annotations collected need to be aggregated into a single collective annotation, in the hope that the quality of this collective annotation will be comparable to that of a traditionally sourced expert annotation. In practice, most scientists working with crowdsourcing methods use simple majority voting to aggregate their data, although some have also used probabilistic models and treated aggregation as a problem of maximum likelihood estimation. The observation that the aggregation step in a collective annotation exercise may be considered a problem of social choice has only been made very recently. Following up on this observation, we show that the axiomatic method, as practiced in social choice theory, can make a contribution to this important domain, and we develop an axiomatic framework for collective annotation, focusing, amongst other things, on the notion of an annotator's bias. We complement our theoretical study with a discussion of a crowdsourcing experiment using data from dialogue modelling in computational linguistics.
Database: OpenAIRE
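
To make the aggregation step concrete, below is a minimal Python sketch of the simple majority (plurality) voting that the description identifies as the aggregation method most commonly used in practice. It is illustrative only: the function name and the example data are hypothetical and not taken from the paper.

    from collections import Counter

    def majority_vote(annotations):
        # Aggregate per-item annotations by plurality: the collective
        # label for each item is the label chosen by the most annotators
        # (ties are broken by whichever label was encountered first).
        return {item: Counter(labels).most_common(1)[0][0]
                for item, labels in annotations.items()}

    # Hypothetical example: three annotators label two dialogue utterances.
    votes = {
        "utt-1": ["yes", "yes", "no"],
        "utt-2": ["no", "no", "yes"],
    }
    print(majority_vote(votes))  # {'utt-1': 'yes', 'utt-2': 'no'}

The paper's axiomatic framework studies alternatives to exactly this kind of rule, e.g., aggregators that correct for an individual annotator's bias rather than counting every vote equally.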