Citations and Trust in LLM Generated Responses
Author: | Ding, Yifan; Facciani, Matthew; Poudel, Amrit; Joyce, Ellen; Aguinaga, Salvador; Veeramani, Balaji; Bhattacharya, Sanmitra; Weninger, Tim |
---|---|
Year of publication: | 2025 |
Subject: | |
Document type: | Working Paper |
Description: | Question answering systems are rapidly advancing, but their opaque nature may impact user trust. We explored trust through an anti-monitoring framework, in which trust is predicted to be correlated with the presence of citations and inversely related to checking those citations. We tested this hypothesis with a live question-answering experiment that presented text responses generated by a commercial chatbot along with a varying number of citations (zero, one, or five), both relevant and random, and recorded whether participants checked the citations as well as their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held true even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content. Comment: Accepted to AAAI 2025 |
Database: | arXiv |
External link: |