Challenging the Validity of Personality Tests for Large Language Models

Author: Sühr, Tom, Dorner, Florian E., Samadi, Samira, Kelava, Augustin
Publication Year: 2023
Subject:
Document Type: Working Paper
Description: With large language models (LLMs) like GPT-4 appearing to behave increasingly human-like in text-based interactions, it has become popular to attempt to evaluate the personality traits of LLMs using questionnaires originally developed for humans. While reusing measures is a resource-efficient way to evaluate LLMs, careful adaptations are usually required to ensure that assessment results are valid even across human subpopulations. In this work, we provide evidence that LLMs' responses to personality tests systematically deviate from human responses, implying that the results of these tests cannot be interpreted in the same way. Concretely, reverse-coded items ("I am introverted" vs. "I am extraverted") are often both answered affirmatively. Furthermore, variation across prompts designed to "steer" LLMs to simulate particular personality types does not follow the clear separation into five independent personality factors observed in human samples. In light of these results, we believe it is important to investigate a test's validity for LLMs before drawing strong conclusions about potentially ill-defined concepts like LLMs' "personality".
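As an illustration of the reverse-coding issue mentioned in the description, the following minimal Python sketch shows how affirmative answers to both an item and its reverse-coded counterpart can be flagged as inconsistent. The item texts, the assumed 1-5 Likert scale, and the agreement threshold are all illustrative assumptions, not the authors' materials or code.

# Minimal sketch of a reverse-coding consistency check.
# Assumes a 1-5 Likert scale and hypothetical item texts; not the authors' code.

from typing import Dict, List, Tuple

# Hypothetical pairs of a regular item and its reverse-coded counterpart.
ITEM_PAIRS: List[Tuple[str, str]] = [
    ("I am extraverted.", "I am introverted."),
    ("I am the life of the party.", "I keep in the background."),
]

SCALE_MIN, SCALE_MAX = 1, 5  # assumed Likert response range


def reverse_code(response: int) -> int:
    """Map a response to a reverse-coded item back onto the regular scale."""
    return SCALE_MAX + SCALE_MIN - response


def is_contradictory(regular: int, reversed_item: int, threshold: int = 4) -> bool:
    """Flag pairs where both the item and its reverse-coded twin are affirmed."""
    return regular >= threshold and reversed_item >= threshold


# Example: a model that "agrees" (4) with both items of the first pair.
responses: Dict[str, int] = {
    "I am extraverted.": 4,
    "I am introverted.": 4,
}

for regular_item, reversed_item in ITEM_PAIRS:
    if regular_item in responses and reversed_item in responses:
        r, rev = responses[regular_item], responses[reversed_item]
        print(
            f"{regular_item!r} -> {r}, {reversed_item!r} -> {rev} "
            f"(reverse-coded: {reverse_code(rev)}), "
            f"contradictory: {is_contradictory(r, rev)}"
        )

In a consistent human-style response pattern, agreeing with one item of such a pair implies disagreeing with the other, so the check above would not flag it; the paper's point is that LLM responses frequently violate this expectation.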
Comment: A shorter, less extensive version of this work was accepted at the Socially Responsible Language Modelling Research (SoLaR) 2023 Workshop at NeurIPS 2023
Database: arXiv