Showing 1 - 10
of 15
for search: '"Hugh Chen"'
Published in:
Communications Medicine, Vol 2, Iss 1, Pp 1-15 (2022)
Qiu et al. present a new approach, IMPACT, that uses explainable artificial intelligence to analyze all-cause mortality. IMPACT provides insights into individualized mortality risk scores while maintaining high model accuracy and the expressive…
External link:
https://doaj.org/article/118aac8e5a2840bfad712a43a432edea
Published in:
Nature Communications, Vol 13, Iss 1, Pp 1-15 (2022)
Series of machine learning models, relevant for tasks in biology, medicine, and finance, usually involve complex feature attribution techniques. The authors introduce a tractable method to compute local feature attributions for a series of machine le…
External link:
https://doaj.org/article/096824d3b2b0426984b418a29c31e4ad
Published in:
npj Digital Medicine, Vol 4, Iss 1, Pp 1-13 (2021)
Abstract: Hundreds of millions of surgical procedures take place annually across the world, which generate a prevalent type of electronic health record (EHR) data comprising time series physiological signals. Here, we present a transferable embedding…
External link:
https://doaj.org/article/b2f5f714657446c482b8c2b572e198c8
Authors:
Joseph D. Janizek, Ayse B. Dincer, Safiye Celik, Hugh Chen, William Chen, Kamila Naxerova, Su-In Lee
Published in:
Nature Biomedical Engineering.
Background: An individual’s biological age is a measurement of health status and provides a mechanistic understanding of aging. Age clocks estimate the biological age of an individual based on their various features. Existing clocks have key limitation…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b9ab54f03b0b107e7b1be4b78397f6a0
https://doi.org/10.1101/2022.10.05.22280735
Authors:
Alex J. DeGrave, Nisha Bansal, Jordan M. Prutkin, Jonathan Himmelfarb, Ronit Katz, Gabriel G. Erion, Bala G. Nair, Su-In Lee, Hugh Chen, Scott M. Lundberg
Published in:
Nature Machine Intelligence
Tree-based machine learning models such as random forests, decision trees and gradient boosted trees are popular nonlinear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here we improve the interp…
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint. We disentangle this complexity into two factors: (1) the ap…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5eacd5cb2204b8e0671c7c014a88cff3
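The two abstracts above both center on Shapley-value feature attributions, which these papers make tractable for tree models. The underlying definition can be computed exactly by brute force for tiny models; a minimal self-contained sketch (the function name, toy model, and baseline convention are illustrative, not taken from the papers):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x.
    Features absent from a coalition are replaced by baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight for a coalition of this size.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model: attributions recover each term's contribution exactly.
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # [2.0, 3.0]
```

The exponential cost over feature subsets is exactly what the tree-specific algorithms above avoid; the sketch is only meant to make the quantity being estimated concrete.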
Authors:
Safiye Celik, William Chen, Kamila Naxerova, Su-In Lee, Joseph D. Janizek, Hugh Chen, Ayse B. Dincer
Complex machine learning models are poised to revolutionize the treatment of diseases like acute myeloid leukemia (AML) by helping physicians choose optimal combinations of anti-cancer drugs based on molecular features. While accurate predictions are…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::4e89bb60a6ebc0b7e809582decbb869b
https://doi.org/10.1101/2021.10.06.463409
Published in:
Nature Communications, 13(1)
Local feature attribution methods are increasingly used to explain complex machine learning models. However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series of models w…
Published in:
Explainable AI in Healthcare and Medicine ISBN: 9783030533519
In healthcare, making the best possible predictions with complex models (e.g., neural networks, ensembles/stacks of different models) can impact patient welfare. In order to make these complex models explainable, we present DeepSHAP for mixed model t…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::08449587a6f088765e143e0784d6ad2e
https://doi.org/10.1007/978-3-030-53352-6_24
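DeepSHAP, mentioned in the abstract above, propagates attribution rules layer by layer through a model. For a purely linear component those rules collapse to a closed form: the attribution of feature i is its weight times its deviation from the baseline. A minimal sketch of the linear case only (function name and toy inputs are illustrative, not from the chapter):

```python
def linear_attributions(weights, x, baseline):
    # For a linear model f(z) = sum_i w_i * z_i, the Shapley attribution
    # of feature i reduces to the closed form w_i * (x_i - baseline_i).
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

w = [2.0, -1.0, 0.5]
x = [1.0, 2.0, 4.0]
b = [0.0, 0.0, 0.0]
phi = linear_attributions(w, x, b)
print(phi)       # [2.0, -2.0, 2.0]
print(sum(phi))  # 2.0, which equals f(x) - f(b)
```

Nonlinear layers are where the method's real work lies; this sketch only shows the base case that the per-layer rules reduce to.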