Performance and biases of Large Language Models in public opinion simulation

Authors: Yao Qu, Jue Wang
Language: English
Year of publication: 2024
Source: Humanities & Social Sciences Communications, Vol 11, Iss 1, Pp 1-13 (2024)
Document type: article
ISSN: 2662-9992
DOI: 10.1057/s41599-024-03609-x
Description: Abstract: The rise of Large Language Models (LLMs) like ChatGPT marks a pivotal advancement in artificial intelligence, reshaping the landscape of data analysis and processing. By simulating public opinion, ChatGPT shows promise in facilitating public policy development. However, challenges persist regarding its applicability worldwide and its biases across demographic groups and thematic domains. Our research employs socio-demographic data from the World Values Survey to evaluate ChatGPT’s performance in diverse contexts. Findings indicate significant performance disparities, especially across countries: the model performs better for Western, English-speaking, and developed nations, most notably the United States, than for others. Disparities also manifest across demographic groups, showing biases related to gender, ethnicity, age, education, and social class. The study further uncovers thematic biases in political and environmental simulations. These results highlight the need to enhance LLMs’ representativeness and address their biases, ensuring their equitable and effective integration into public opinion research alongside conventional methodologies.
Database: Directory of Open Access Journals