Showing 1 - 10 of 44 for search: '"Davani, Aida Mostafazadeh"'
While human annotations play a crucial role in language technologies, annotator subjectivity has long been overlooked in data collection. Recent studies that have critically examined this issue are often situated in the Western context, and solely do…
External link:
http://arxiv.org/abs/2404.10857
Generative language models are transforming our digital ecosystem, but they often inherit societal biases, for instance stereotypes associating certain attributes with specific identity groups. While whether and how these biases are mitigated may depend…
External link:
http://arxiv.org/abs/2404.05866
Author:
Prabhakaran, Vinodkumar, Homan, Christopher, Aroyo, Lora, Davani, Aida Mostafazadeh, Parrish, Alicia, Taylor, Alex, Díaz, Mark, Wang, Ding, Serapio-García, Gregory
Published in:
2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Human annotation plays a core role in machine learning -- annotations for supervised models, safety guardrails for generative models, and human feedback for reinforcement learning, to cite a few avenues. However, the fact that many of these human ann…
External link:
http://arxiv.org/abs/2311.05074
Author:
Trager, Jackson, Ziabari, Alireza S., Davani, Aida Mostafazadeh, Golazizian, Preni, Karimi-Malekabadi, Farzan, Omrani, Ali, Li, Zhihe, Kennedy, Brendan, Reimer, Nils Karl, Reyes, Melissa, Cheng, Kelsey, Wei, Mellow, Merrifield, Christina, Khosravi, Arta, Alvarez, Evans, Dehghani, Morteza
Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests. Various computational methods in Natural Language Processing…
External link:
http://arxiv.org/abs/2208.05545
Social stereotypes negatively impact individuals' judgements about different groups and may have a critical role in how people understand language directed toward minority social groups. Here, we assess the role of social stereotypes in the automated…
External link:
http://arxiv.org/abs/2110.14839
Majority voting and averaging are common approaches employed to resolve annotator disagreements and derive single ground truth labels from multiple annotations. However, annotators may systematically disagree with one another, often reflecting their…
External link:
http://arxiv.org/abs/2110.05719
A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single "ground truth" label or score, through m…
External link:
http://arxiv.org/abs/2110.05699
Author:
Davani, Aida Mostafazadeh, Omrani, Ali, Kennedy, Brendan, Atari, Mohammad, Ren, Xiang, Dehghani, Morteza
Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may…
External link:
http://arxiv.org/abs/2108.01721
Author:
Jin, Xisen, Barbieri, Francesco, Kennedy, Brendan, Davani, Aida Mostafazadeh, Neves, Leonardo, Ren, Xiang
Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution. Previous works focus on detecting these biases, reducing bias in data representations…
External link:
http://arxiv.org/abs/2010.12864
Author:
Davani, Aida Mostafazadeh, Omrani, Ali, Kennedy, Brendan, Atari, Mohammad, Ren, Xiang, Dehghani, Morteza
Approaches for mitigating bias in supervised models are designed to reduce models' dependence on specific sensitive features of the input data, e.g., mentioned social groups. However, in the case of hate speech detection, it is not always desirable to…
External link:
http://arxiv.org/abs/2010.12779