A proof of concept reinforcement learning based tool for non parametric population pharmacokinetics workflow optimization.
Author: | Otalvaro JD; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA.; Bioinstrumentation and Clinical Engineering Research Group, Engineering Department, University of Antioquia, Medellín, Colombia.; Laboratory of Integrated and Specialized Medicine, Medical School, University of Antioquia, Medellín, Colombia., Yamada WM; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA., Hernandez AM; Bioinstrumentation and Clinical Engineering Research Group, Engineering Department, University of Antioquia, Medellín, Colombia., Zuluaga AF; Laboratory of Integrated and Specialized Medicine, Medical School, University of Antioquia, Medellín, Colombia., Chen R; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA., Neely MN; Laboratory of Applied Pharmacokinetics and Bioinformatics, Department of Infectious Diseases, Children's Hospital Los Angeles, Los Angeles, CA, USA. mneely@chla.usc.edu. |
Language: | English |
Source: | Journal of pharmacokinetics and pharmacodynamics [J Pharmacokinet Pharmacodyn] 2023 Feb; Vol. 50 (1), pp. 33-43. Date of Electronic Publication: 2022 Dec 07. |
DOI: | 10.1007/s10928-022-09829-5 |
Abstract: | The building of population pharmacokinetic models can be described as an iterative process: given a model and a dataset, the pharmacometrician introduces changes to the model specification, performs an evaluation, and, based on the predictions obtained, performs further optimization. This process (perform an action, witness a result, optimize your knowledge) is a natural scenario for the implementation of Reinforcement Learning (RL) algorithms. In this paper we present the conceptual background and an implementation of one such algorithm, aiming to show pharmacometricians how to automate (to a certain point) the iterative model-building process. We present the selected discretization of the action and state spaces. SARSA (State-Action-Reward-State-Action) was selected as the RL algorithm, configured with a window of 1000 episodes and a limit of 30 actions per episode. SARSA was configured to control an interface to the Non-Parametric Optimal Design (NPOD) algorithm, which performed the actual parameter optimization. The RL-based agent obtained the same likelihood and number of support points as the original paper, with a similar distribution. The total time needed to train the agent was 5.5 h, although we believe this can be further improved. It is possible to automatically find the structural model that maximizes the final likelihood for a specific pharmacokinetic dataset by using an RL algorithm. The framework provided could allow the integration of further actions, e.g., adding or removing covariates, adding non-linear compartments, or executing secondary analyses. Several limitations were found while performing this study, and we hope to address them in future work. (© 2022. The Author(s).) A minimal sketch of the SARSA loop described here appears below the record. |
Database: | MEDLINE |
External link: |
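The abstract above outlines a tabular SARSA control loop (1000 episodes, at most 30 actions per episode) whose reward comes from an external likelihood evaluation. The sketch below illustrates that loop under stated assumptions; it is not the authors' implementation. The `npod_evaluate` stub, the `ACTIONS` list, the likelihood-improvement reward, and the `alpha`/`gamma`/`epsilon` values are all illustrative placeholders; only the episode and step limits come from the abstract.

```python
import random
from collections import defaultdict

# Illustrative action space: discrete edits to the structural model.
# The paper's actual action/state discretization differs.
ACTIONS = ["add_compartment", "remove_compartment", "toggle_lag", "stop"]

def npod_evaluate(state):
    """Hypothetical stand-in for the NPOD run: returns a pseudo
    log-likelihood for the current model specification string."""
    return -(hash(state) % 100) / 10.0

Q = defaultdict(float)               # tabular action-value function Q(s, a)
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed hyperparameters

def choose_action(state):
    """Epsilon-greedy policy over the tabular Q-values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(1000):          # window of 1000 episodes (per the abstract)
    state = "base_model"
    action = choose_action(state)
    prev_ll = npod_evaluate(state)
    for step in range(30):           # limit of 30 actions per episode
        next_state = f"{state}|{action}"   # toy transition: append the edit
        ll = npod_evaluate(next_state)
        reward = ll - prev_ll              # reward the likelihood improvement
        next_action = choose_action(next_state)
        # SARSA update: on-policy, bootstraps from the action actually taken next
        Q[(state, action)] += alpha * (
            reward + gamma * Q[(next_state, next_action)] - Q[(state, action)]
        )
        state, action, prev_ll = next_state, next_action, ll
        if action == "stop":
            break
```

SARSA is on-policy: the update uses the action the exploratory policy actually takes next, rather than the greedy maximum as in Q-learning, which keeps the learned values consistent with the behavior that incurs the (expensive) evaluation calls.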