Simulated misuse of large language models and clinical credit systems.

Author: Anibal JT; Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA. anibal.james@nih.gov., Huth HB; Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA., Gunkel J; Department of Bioethics, National Institutes of Health (NIH), Bethesda, MD, USA., Gregurick SK; Office of the Director, National Institutes of Health (NIH), Bethesda, MD, USA., Wood BJ; Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA.
Language: English
Source: NPJ digital medicine [NPJ Digit Med] 2024 Nov 11; Vol. 7 (1), pp. 317. Date of Electronic Publication: 2024 Nov 11.
DOI: 10.1038/s41746-024-01306-2
Abstract: In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. Such models could be trained to allocate resources according to unjust criteria drawn from multimodal data: financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights, and could thereby facilitate AI-driven social credit systems.
Competing Interests: The authors declare no competing interests.
(© 2024. The Author(s).)
Database: MEDLINE