Author: |
Najafali, Daniel, Hinson, Chandler, Camacho, Justin M., Galbraith, Logan G., Tople, Tannon L., Eble, Danielle, Weinstein, Brielle, Schechter, Loren S., Dorafshar, Amir H., Morrison, Shane D. |
Subject: |
|
Source: |
European Journal of Plastic Surgery; Dec2023, Vol. 46 Issue 6, p1169-1176, 8p |
Abstract: |
Background: Artificial intelligence (AI) is evolving rapidly, as are its uses in healthcare and the scientific literature. There are concerns about whether AI such as ChatGPT harbors implicit biases. This study explores ChatGPT's ability to reference evidence-based recommendations related to gender-affirming surgery (GAS). Methods: ChatGPT was prompted with open-ended questions on GAS as well as the statements of recommendations from the World Professional Association for Transgender Health Standards of Care for the Health of Transgender and Gender Diverse People, Version 8 (WPATH SOC). Responses were analyzed based on agreement with, and reference to, the WPATH SOC recommendations. Results: A total of 95 WPATH statements of recommendations were given to the chatbot. There were 70 (74%) agreements, 0 (0%) disagreements, and 25 (26%) neutral responses. WPATH was directly referenced in 12 (13%) responses. ChatGPT successfully described aspects of gender diversity, including the treatment of gender dysphoria. Conclusions: While often using neutral language, ChatGPT does intermittently reference WPATH and its evidence-based recommendations. As AI evolves, so can the spread of misinformation if it is not rooted in evidence-based recommendations. Furthermore, AI may serve as a viable tool for patient education on GAS. Level of evidence: Not gradable [ABSTRACT FROM AUTHOR] |
Database: |
Complementary Index |
External link: |
|