Artificial intelligence in clinical pharmacology: A case study and scoping review of large language models and bioweapon potential.
Author: | Rubinic I; University of Rijeka School of Medicine, Rijeka, Croatia.; Clinical Hospital Centre Rijeka, Rijeka, Croatia., Kurtov M; Clinical Hospital Sveti Duh, Zagreb, Croatia., Rubinic I; School of Engineering, University of Rijeka, Rijeka, Croatia., Likic R; University of Zagreb School of Medicine, Zagreb, Croatia.; Clinical Hospital Centre Zagreb, Zagreb, Croatia., Dargan PI; Faculty of Life Sciences and Medicine, King's College London, London, UK.; Clinical Toxicology, Guy's and St Thomas' NHS Foundation Trust, London, UK., Wood DM; Faculty of Life Sciences and Medicine, King's College London, London, UK.; Clinical Toxicology, Guy's and St Thomas' NHS Foundation Trust, London, UK. |
Language: | English |
Zdroj: | British journal of clinical pharmacology [Br J Clin Pharmacol] 2024 Mar; Vol. 90 (3), pp. 620-628. Date of Electronic Publication: 2023 Sep 24. |
DOI: | 10.1111/bcp.15899 |
Abstract: | This paper explores the possibility of employing large language models (LLMs) - a type of artificial intelligence (AI) - in clinical pharmacology, with a focus on their possible misuse in bioweapon development. Ethical considerations, legislation and potential risk-reduction measures are also analysed. The existing literature is reviewed to investigate the potential misuse of AI and LLMs in bioweapon creation. The search included articles from PubMed, Scopus and the Web of Science Core Collection that were identified using a specific protocol. To explore the regulatory landscape, the OECD.ai platform was used. The review highlights the dual-use vulnerability of AI and LLMs, with a focus on bioweapon development. A case study is then used to illustrate how AI manipulation could result in harmful substance synthesis. Existing regulations inadequately address the ethical concerns tied to AI and LLMs. Mitigation measures are proposed, including technical solutions (explainable AI), establishing ethical guidelines through collaborative efforts, and implementing policy changes to create a comprehensive regulatory framework. The integration of AI and LLMs into clinical pharmacology presents invaluable opportunities while also introducing significant ethical and safety considerations. Addressing the dual-use nature of AI requires robust regulation, as well as a strategic approach grounded in technical solutions and ethical values following the principles of transparency, accountability and safety. Additionally, AI's potential role in developing countermeasures against novel hazardous substances is underscored. By adopting a proactive approach, the potential benefits of AI and LLMs can be fully harnessed while minimizing the associated risks. (© 2023 British Pharmacological Society.) |
Database: | MEDLINE |
External link: |