Ethical Application of Generative Artificial Intelligence in Medicine.
Authors: Hasan SS; Rush Medical College, Chicago, IL, USA. Fury MS; Baton Rouge Orthopaedic Clinic, Baton Rouge, LA, USA. Woo JJ; Brown University/The Warren Alpert School of Brown University, Providence, RI, USA. Kunze KN; Hospital for Special Surgery, New York, NY, USA. Ramkumar PN; Commons Clinic, Long Beach, CA, USA. Electronic address: premramkumar@gmail.com.
Language: English
Source: Arthroscopy: The Journal of Arthroscopic & Related Surgery: Official Publication of the Arthroscopy Association of North America and the International Arthroscopy Association [Arthroscopy]. 2024 Dec 15. Date of Electronic Publication: 2024 Dec 15.
DOI: 10.1016/j.arthro.2024.12.011
Abstract: Generative artificial intelligence (AI) may revolutionize healthcare, providing solutions that range from enhancing diagnostic accuracy to personalizing treatment plans. However, its rapid and largely unregulated integration into medicine raises ethical concerns related to data integrity, patient safety, and appropriate oversight. One of the primary ethical challenges lies in generative AI's potential to produce misleading or fabricated information, posing risks of misdiagnosis or inappropriate treatment recommendations and underscoring the necessity for robust physician oversight. Transparency also remains a critical concern, as the closed-source nature of many large language models (LLMs) prevents both patients and healthcare providers from understanding the reasoning behind AI-generated outputs, potentially eroding trust. The lack of regulatory approval for AI as a medical device, combined with concerns about the security of patient-derived data and AI-generated synthetic data, further complicates its safe integration into clinical workflows. Synthetic datasets generated by AI, while valuable for augmenting research in areas with scarce data, also complicate questions of data ownership, patient consent, and scientific validity. Additionally, generative AI's ability to streamline administrative tasks risks depersonalizing care, further distancing providers from patients. These challenges compound deeper issues plaguing the healthcare system, including the emphasis on volume and speed over value and expertise. The use of generative AI in medicine enables the mass scaling of synthetic information, necessitating careful adoption to protect patient care and medical advancement. Given these considerations, generative AI applications warrant regulatory and critical scrutiny. Key starting points include establishing strict standards for data security and transparency, implementing oversight akin to Institutional Review Boards (IRBs) to govern data usage, and developing interdisciplinary guidelines that involve developers, clinicians, and ethicists. By addressing these concerns, we can better align generative AI adoption with the core foundations of humanistic healthcare: preserving patient safety, autonomy, and trust while harnessing AI's transformative potential. LEVEL OF EVIDENCE: Level V, expert opinion. (Copyright © 2024. Published by Elsevier Inc.)
Database: MEDLINE
External link: