Perils and opportunities in using large language models in psychological research.

Authors:
Abdurahman S; Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA.
Atari M; Department of Human Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA; Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA 01003, USA.
Karimi-Malekabadi F; Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA.
Xue MJ; Department of Human Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA.
Trager J; Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA.
Park PS; Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Golazizian P; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA; Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA.
Omrani A; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA; Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA.
Dehghani M; Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA; Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA.
Language: English
Source: PNAS Nexus, 2024 Jul 16; Vol. 3 (7), pgae245. Date of Electronic Publication: 2024 Jul 16 (Print Publication: 2024).
DOI: 10.1093/pnasnexus/pgae245
Abstract: The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, mainly as a model of the human psyche or as a general text-analysis tool. However, the trend of using LLMs without sufficient attention to their limitations and risks, which we rhetorically refer to as "GPTology", can be detrimental given the easy access to models such as ChatGPT. Beyond existing general guidelines, we investigate the current limitations, ethical implications, and potential of LLMs specifically for psychological research, and show their concrete impact in various empirical studies. Our results highlight the importance of recognizing global psychological diversity, cautioning against treating LLMs (especially in zero-shot settings) as universal solutions for text analysis, and developing transparent, open methods to address LLMs' opaque nature for reliable, reproducible, and robust inference from AI-generated data. While acknowledging LLMs' utility for automating tasks such as text annotation and for expanding our understanding of human psychology, we argue for diversifying human samples and expanding psychology's methodological toolbox to promote an inclusive, generalizable science, countering homogenization and over-reliance on LLMs.
(© The Author(s) 2024. Published by Oxford University Press on behalf of National Academy of Sciences.)
Database: MEDLINE