Narrowing the gap: expected versus deployment performance.
Author: Zhou AX, Aczon MD, Laksana E (Department of Anesthesiology and Critical Care Medicine, Children's Hospital Los Angeles, Los Angeles, California, USA; Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit, Children's Hospital Los Angeles, Los Angeles, California, USA), Ledbetter DR (Advanced Analytics for Healthcare, KPMG International Limited, Dallas, Texas, USA), Wetzel RC (Department of Anesthesiology and Critical Care Medicine, Children's Hospital Los Angeles, Los Angeles, California, USA; Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit, Children's Hospital Los Angeles, Los Angeles, California, USA; Department of Pediatrics and Anesthesiology, University of Southern California Keck School of Medicine, Los Angeles, California, USA)
Language: English
Source: Journal of the American Medical Informatics Association: JAMIA [J Am Med Inform Assoc] 2023 Aug 18; Vol. 30 (9), pp. 1474-1485.
DOI: 10.1093/jamia/ocad100
Abstract: Objectives: Successful model development requires both an accurate a priori understanding of future performance and high performance on deployment. Optimistic estimates of model performance that go unrealized in real-world clinical settings can contribute to nonuse of predictive models. This study used 2 tasks, predicting ICU mortality and Bi-Level Positive Airway Pressure failure, to quantify: (1) how well internal test performance derived from different methods of partitioning data into development and test sets estimates the future deployment performance of Recurrent Neural Network models and (2) the effects of including older data in the training set on model performance. Materials and Methods: The cohort consisted of patients admitted between 2010 and 2020 to the Pediatric Intensive Care Unit of a large quaternary children's hospital. Data from 2010-2018 were partitioned into different development and test sets to measure internal test performance. Deployable models were trained on 2010-2018 data and assessed on 2019-2020 data, which represented a real-world deployment scenario. Optimism, defined as the overestimation of deployed performance by internal test performance, was measured. Deployable models were also compared with each other to quantify the effect of including older data during training. Results, Discussion, and Conclusion: Longitudinal partitioning methods, where models are tested on newer data than the development set, yielded the least optimism. Including older years in the training dataset did not degrade deployable model performance. Using all available data for model development fully leveraged longitudinal partitioning by measuring year-to-year performance. (© The Author(s) 2023. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
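The core contrast in the abstract, longitudinal partitioning (test strictly on newer data than the development set) versus time-agnostic random partitioning, can be sketched in a few lines. The year boundaries (2010-2018 for development, 2019-2020 for deployment testing) follow the abstract; the synthetic admission records and all variable names are illustrative assumptions, not the study's actual code or data.

```python
import random

# Synthetic stand-in for a cohort of PICU admissions, each tagged with an
# admission year (the real study used 2010-2020 admissions).
random.seed(0)
admissions = [{"id": i, "year": random.randint(2010, 2020)} for i in range(1000)]

# Longitudinal partitioning: develop on older data, evaluate on strictly
# newer data, mimicking the real deployment scenario.
development = [a for a in admissions if a["year"] <= 2018]
deployment_test = [a for a in admissions if a["year"] >= 2019]

# Random partitioning: shuffle and split, ignoring admission time. Internal
# test performance measured this way tends to be optimistic relative to the
# longitudinal (deployment-like) evaluation above.
shuffled = admissions[:]
random.shuffle(shuffled)
cut = int(0.8 * len(shuffled))
random_train, random_test = shuffled[:cut], shuffled[cut:]
```

Note that under random partitioning the test set contains admissions contemporaneous with (or older than) the training data, so it cannot surface the temporal drift that a deployed model actually faces; the longitudinal split makes that drift part of the evaluation.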
Database: MEDLINE
External link: