Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?

Author: Khurana, Urja; Nalisnick, Eric; Fokkens, Antske; Swayamdipta, Swabha
Publication year: 2024
Subject:
Document type: Working Paper
Description: Subjective tasks in NLP have been mostly relegated to objective standards, where the gold label is decided by taking the majority vote. This obfuscates annotator disagreement and the inherent uncertainty of the label. We argue that subjectivity should factor into model decisions and play a direct role via calibration under a selective prediction setting. Specifically, instead of calibrating confidence purely from the model's perspective, we calibrate models for subjective tasks based on crowd worker agreement. Our method, Crowd-Calibrator, models the distance between the distribution of crowd worker labels and the model's own distribution over labels to inform whether the model should abstain from a decision. On two highly subjective tasks, hate speech detection and natural language inference, our experiments show that Crowd-Calibrator either outperforms existing selective prediction baselines or achieves performance competitive with them. Our findings highlight the value of bringing human decision-making into model predictions.
Comment: Accepted at COLM 2024
Database: arXiv
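
The description above states the core mechanism only at a high level: compare the crowd workers' label distribution with the model's predictive distribution and abstain when the two diverge. The sketch below is a minimal illustration of that idea, not the paper's actual method; the distance measure (total variation), the threshold value, and the assumption that crowd labels are available for the instance at hand are all choices made here for demonstration.

```python
import numpy as np

def crowd_label_distribution(annotations: list[int], num_classes: int) -> np.ndarray:
    """Empirical (soft) label distribution from a set of crowd annotations."""
    counts = np.bincount(np.asarray(annotations), minlength=num_classes)
    return counts / counts.sum()

def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two categorical distributions."""
    return 0.5 * float(np.abs(p - q).sum())

def should_abstain(model_probs: np.ndarray,
                   crowd_probs: np.ndarray,
                   threshold: float = 0.3) -> bool:
    """Abstain when the model's label distribution is far from the crowd's.

    `threshold` is a hypothetical abstention cutoff; in a selective
    prediction setting it would be tuned on held-out calibration data.
    """
    return total_variation(model_probs, crowd_probs) > threshold

# Illustrative 3-class NLI example (entailment / neutral / contradiction).
model_probs = np.array([0.70, 0.20, 0.10])        # model's softmax output
crowd_probs = crowd_label_distribution([0, 1, 1, 2, 1], num_classes=3)
print(should_abstain(model_probs, crowd_probs))   # True: model and crowd diverge
```

Note that per-instance crowd labels are generally unavailable at test time, so any realized system would work with an estimate of the crowd distribution rather than the true one; the sketch only shows the distance-and-threshold abstention rule described in the abstract.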