On the Reliability of Large Language Models to Misinformed and Demographically-Informed Prompts

Author: Aremu, Toluwani, Akinwehinmi, Oluwakemi, Nwagu, Chukwuemeka, Ahmed, Syed Ishtiaque, Orji, Rita, Del Amo, Pedro Arnau, Saddik, Abdulmotaleb El
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: We investigate the behaviour and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions containing demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to answer such closed-ended questions correctly. However, qualitative insights gathered from domain experts show that concerns remain regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution.
Comment: Study conducted between August and December 2023. Under review at AAAI-AI Magazine. Submitted for archival purposes only
Database: arXiv