Inductive thematic analysis of healthcare qualitative interviews using open-source large language models: How does it compare to traditional methods?

Authors: Mathis WS; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA. Electronic address: Walter.Mathis@Yale.edu., Zhao S; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA., Pratt N; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA., Weleff J; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA., De Paoli S; Division of Sociology, School of Business, Law and Social Sciences, Abertay University, Dundee, Scotland, United Kingdom.
Language: English
Source: Computer methods and programs in biomedicine [Comput Methods Programs Biomed] 2024 Oct; Vol. 255, pp. 108356. Date of Electronic Publication: 2024 Jul 24.
DOI: 10.1016/j.cmpb.2024.108356
Abstract: Background: Large language models (LLMs) are generative artificial intelligence models that have ignited much interest and discussion about their utility in clinical and research settings. Despite this interest, there has been sparse analysis of their use in qualitative thematic analysis comparing their current ability to that of human coding and analysis. In addition, no published analysis has examined their use on real-world, protected health information.
Objective: Here we fill that gap in the literature by comparing an LLM to standard human thematic analysis in real-world, semi-structured interviews of both patients and clinicians within a psychiatric setting.
Methods: Using a 70-billion-parameter open-source LLM running on local hardware and advanced prompt engineering techniques, we produced themes that summarized a full corpus of interviews in minutes. We then applied three different evaluation methods to quantify the similarity between themes produced by the LLM and those produced by humans.
Results: These evaluations revealed similarities ranging from moderate to substantial (Jaccard similarity coefficients of 0.44-0.69), which are promising preliminary results.
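For context, the Jaccard similarity coefficient reported above measures overlap between two sets as |A ∩ B| / |A ∪ B|. The sketch below illustrates the calculation on hypothetical theme labels (invented for illustration; they are not themes from the study):

```python
def jaccard_similarity(set_a: set, set_b: set) -> float:
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|.

    Ranges from 0.0 (no overlap) to 1.0 (identical sets).
    Two empty sets are treated as identical.
    """
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)


# Hypothetical theme labels, purely illustrative.
llm_themes = {"access to care", "stigma", "medication adherence", "trust in clinicians"}
human_themes = {"access to care", "stigma", "family support", "trust in clinicians"}

# 3 shared themes out of 5 distinct themes overall.
print(jaccard_similarity(llm_themes, human_themes))  # → 0.6
```

A coefficient of 0.6, as in this toy example, would fall within the 0.44-0.69 range the study reports.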
Conclusion: Our study demonstrates that open-source LLMs can effectively generate robust themes from qualitative data, achieving substantial similarity to human-generated themes. The validation of LLMs in thematic analysis, coupled with evaluation methodologies, highlights their potential to enhance and democratize qualitative research across diverse fields.
Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.
(Copyright © 2024 Elsevier B.V. All rights reserved.)
Database: MEDLINE