Generation of Backward-Looking Complex Reflections for a Motivational Interviewing-Based Smoking Cessation Chatbot Using GPT-4: Algorithm Development and Validation.
Author: | Kumar AT; Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, Canada., Wang C; Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, Canada., Dong A; Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, Canada., Rose J; Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, Canada.; The Edward S Rogers Sr Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, ON, M5S 3G4, Canada, +1 416-978-6992. |
Language: | English |
Source: | JMIR Mental Health [JMIR Ment Health] 2024 Sep 26; Vol. 11, pp. e53778. Date of Electronic Publication: 2024 Sep 26. |
DOI: | 10.2196/53778 |
Abstract: | Background: Motivational interviewing (MI) is a therapeutic technique that has been successful in helping smokers reduce smoking but has limited accessibility due to the high cost and low availability of clinicians. To address this, the MIBot project has sought to develop a chatbot that emulates an MI session with a client, with the specific goal of moving an ambivalent smoker toward quitting. One key element of an MI conversation is reflective listening, in which a therapist expresses their understanding of what the client has said by uttering a reflection that encourages the client to continue their thought process. Complex reflections link the client's responses to relevant ideas and facts to enhance this contemplation. Backward-looking complex reflections (BLCRs) link the client's most recent response to a relevant selection of the client's previous statements. Our current chatbot can generate complex reflections, but not BLCRs, using large language models (LLMs) such as GPT-2, which allow the generation of unique, human-like messages customized to client responses. Recent advancements in these models, such as the introduction of GPT-4, provide a novel way to generate complex text by feeding the models instructions and conversational history directly, making this a promising approach for generating BLCRs. Objective: This study aims to develop a method to generate BLCRs for an MI-based smoking cessation chatbot and to measure the method's effectiveness. Methods: LLMs such as GPT-4 can be induced to produce specific types of responses by "asking" them with an English-language description of the desired output. These descriptions are called prompts, and the practice of writing a description that causes an LLM to generate the required output is termed prompt engineering.
We iteratively refined an instruction prompting GPT-4 to generate a BLCR, given the portion of the conversation transcript up to the point where the reflection was needed. The approach was tested on 50 previously collected MIBot transcripts of conversations with smokers and was used to generate a total of 150 reflections. The quality of the reflections was rated on a 4-point scale by 3 independent raters to determine whether they met specific criteria for acceptability. Results: Of the 150 generated reflections, 132 (88%) met the threshold for acceptability. The remaining 18 (12%) had one or more flaws that made them inappropriate as BLCRs. The 3 raters had pairwise agreement on 80% to 88% of these scores. Conclusions: The method presented for generating BLCRs is good enough to serve as one source of reflections in an MI-style conversation but would need an automatic checker to eliminate the unacceptable ones. This work illustrates the power of the new LLMs to generate therapeutic, client-specific responses under the command of a language-based specification. (© Ash Tanuj Kumar, Cindy Wang, Alec Dong, Jonathan Rose. Originally published in JMIR Mental Health (https://mental.jmir.org).) |
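The prompting approach described in the Methods can be sketched as follows. This is a minimal illustration only: the instruction wording, function name, and transcript format below are assumptions for demonstration, not the authors' actual prompt. It shows how an instruction plus the conversation history up to the reflection point could be assembled into the message format accepted by OpenAI's chat completions API.

```python
# Sketch of assembling a GPT-4 prompt for a backward-looking complex
# reflection (BLCR). The instruction text and helper names below are
# illustrative assumptions; the paper's actual prompt is not reproduced.

# Hypothetical instruction: ask the model to tie the client's latest
# response back to a relevant earlier statement by the client.
BLCR_INSTRUCTION = (
    "You are a motivational interviewing therapist. Read the conversation "
    "so far and write one backward-looking complex reflection: a single "
    "statement that links the client's most recent response to a relevant "
    "earlier statement the client made."
)

def build_blcr_messages(transcript):
    """Build a chat-completions message list from (speaker, text) turns.

    `transcript` holds the conversation up to the point where the
    reflection is needed; the last turn is the client's latest response.
    """
    history = "\n".join(f"{speaker}: {text}" for speaker, text in transcript)
    return [
        {"role": "system", "content": BLCR_INSTRUCTION},
        {"role": "user", "content": history},
    ]

if __name__ == "__main__":
    turns = [
        ("Therapist", "What do you like about smoking?"),
        ("Client", "It helps me relax after work."),
        ("Therapist", "And what worries you about it?"),
        ("Client", "My kids keep asking me to stop."),
    ]
    messages = build_blcr_messages(turns)
    # To generate the reflection, these messages would be sent to GPT-4,
    # e.g. with the openai package:
    #   from openai import OpenAI
    #   reply = OpenAI().chat.completions.create(
    #       model="gpt-4", messages=messages)
    #   print(reply.choices[0].message.content)
    print(len(messages))
```

The system/user split mirrors the paper's description of feeding the model an instruction and the conversational history directly; the actual study also evaluated the generated reflections with human raters, which this sketch does not cover.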
Database: | MEDLINE |
External link: |