[ChatGPT: aid to medical ethics decision making?]

Authors: Schmidt KW; Zentrum für Ethik in der Medizin, Agaplesion Markus Krankenhaus, Wilhelm-Epstein-Str. 4, 60431, Frankfurt a. M., Deutschland. kurt.schmidt@ekhn.de., Lechner F; Institut für Künstliche Intelligenz, Universitätsklinikum Gießen und Marburg, Marburg, Deutschland.
Language: German
Source: Innere Medizin (Heidelberg, Germany) [Inn Med (Heidelb)] 2023 Nov; Vol. 64 (11), pp. 1065-1071. Date of Electronic Publication: 2023 Oct 11.
DOI: 10.1007/s00108-023-01601-2
Abstract: Background: Physicians have to make countless decisions every day. The medical, ethical and legal aspects are often intertwined and subject to change over time. Involving an ethics committee or arranging an ethics consultation are examples of potential aids to decision making. Whether and how artificial intelligence (AI) and the large language model (LLM) of the company OpenAI (San Francisco, CA, USA), known under the name ChatGPT, can also help and support ethical decision making is increasingly becoming a matter of controversial debate.
Material and Methods: Based on a case example, in which a female physician is confronted with ethical and legal issues and presents them to ChatGPT for answers, initial indications of the model's strengths and weaknesses are ascertained.
Conclusion: Given the rapid technical development and access to ever increasing quantities of data, the use of such tools should be closely observed and evaluated.
(© 2023. The Author(s), under exclusive licence to Springer Medizin Verlag GmbH, part of Springer Nature.)
Database: MEDLINE