Showing 1 - 10 of 40,195
for search: '"political bias"'
Author:
Martinez, Manuel Nunez, Schmer-Galunder, Sonja, Liu, Zoey, Youm, Sangpil, Jayaweera, Chathuri, Dorr, Bonnie J.
The unchecked spread of digital information, combined with increasing political polarization and the tendency of individuals to isolate themselves from opposing political viewpoints, has driven researchers to develop systems for automatically detecting …
External link:
http://arxiv.org/abs/2411.04328
Bias assessment of news sources is paramount for professionals, organizations, and researchers who rely on truthful evidence for information gathering and reporting. While certain bias indicators are discernible from content analysis, descriptors like …
External link:
http://arxiv.org/abs/2410.17655
Author:
Fulay, Suyash, Brannon, William, Mohanty, Shrestha, Overney, Cassandra, Poole-Dayan, Elinor, Roy, Deb, Kabbara, Jad
Language model alignment research often attempts to ensure that models are not only helpful and harmless, but also truthful and unbiased. However, optimizing these objectives simultaneously can obscure how improving one aspect might impact the others …
External link:
http://arxiv.org/abs/2409.05283
Author:
Hernandes, Raphael, Corsi, Giulio
This research investigates whether OpenAI's GPT-4, a state-of-the-art large language model, can accurately classify the political bias of news sources based solely on their URLs. Given the subjective nature of political labels, third-party bias ratings …
External link:
http://arxiv.org/abs/2407.14344
This paper investigates the presence of political bias in emotion inference models used for sentiment analysis (SA) in social science research. Machine learning models often reflect biases in their training data, impacting the validity of their outcomes …
External link:
http://arxiv.org/abs/2407.13891
LLMs are changing the way humans create and interact with content, potentially affecting citizens' political opinions and voting decisions. As LLMs increasingly shape our digital information ecosystems, auditing to evaluate biases, sycophancy, or …
External link:
http://arxiv.org/abs/2407.18008
Large Language Models (LLMs) have demonstrated remarkable capabilities in executing tasks based on natural language queries. However, these models, trained on curated datasets, inherently embody biases ranging from racial to national and gender biases …
External link:
http://arxiv.org/abs/2407.17688
The assessment of bias within Large Language Models (LLMs) has emerged as a critical concern in the contemporary discourse surrounding Artificial Intelligence (AI) in the context of their potential impact on societal dynamics. Recognizing and considering …
External link:
http://arxiv.org/abs/2405.13041
We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and …
External link:
http://arxiv.org/abs/2403.18932