Shadows of wisdom: Classifying meta-cognitive and morally grounded narrative content via large language models.

Author: Stavropoulos, Alexander; Crone, Damien L.; Grossmann, Igor
Source: Behavior Research Methods; Oct 2024, Vol. 56 Issue 7, p7632-7646, 15p
Abstract: We investigated large language models' (LLMs) efficacy in classifying complex psychological constructs like intellectual humility, perspective-taking, open-mindedness, and search for a compromise in narratives of 347 Canadian and American adults reflecting on a workplace conflict. Using state-of-the-art models like GPT-4 across few-shot and zero-shot paradigms and RoB-ELoC (RoBERTa-fine-tuned-on-Emotion-with-Logistic-Regression-Classifier), we compared their performance with expert human coders. Results showed robust classification by LLMs, with over 80% agreement and F1 scores above 0.85, and high human-model reliability (Cohen's κ Md across top models = .80). RoB-ELoC and few-shot GPT-4 were standout classifiers, although somewhat less effective in categorizing intellectual humility. We offer example workflows for easy integration into research. Our proof-of-concept findings indicate the viability of both open-source and commercial LLMs in automating the coding of complex constructs, potentially transforming social science research. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
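
The abstract reports human-model agreement as percent agreement, F1, and Cohen's κ. The snippet below is a minimal illustrative sketch (not the authors' published workflow) of how such metrics can be computed for one construct, assuming binary human and model codes and the scikit-learn metric functions; the label vectors are hypothetical.

```python
# Illustrative sketch: agreement between human-coded and model-assigned
# binary labels for a single construct (e.g., perspective-taking).
# The data below are made up for demonstration purposes.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# 1 = construct present in the narrative, 0 = absent (hypothetical labels)
human_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
model_labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreement = accuracy_score(human_labels, model_labels)  # proportion of matching codes
f1 = f1_score(human_labels, model_labels)               # harmonic mean of precision and recall
kappa = cohen_kappa_score(human_labels, model_labels)   # chance-corrected agreement

print(f"Agreement: {agreement:.2f}, F1: {f1:.2f}, Cohen's kappa: {kappa:.2f}")
```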