Author:
Mincu, Diana; Loreaux, Eric; Hou, Shaobo; Baur, Sebastien; Protsyuk, Ivan; Seneviratne, Martin G; Mottram, Anne; Tomasev, Nenad; Karthikesalingam, Alan; Schrouff, Jessica
Year of publication:
2020
Subject:

Source:
CHIL '21: Proceedings of the Conference on Health, Inference, and Learning, 2021
Document type:
Working Paper
DOI:
10.1145/3450439.3451858
Description:
Recurrent Neural Networks (RNNs) are often used for sequential modeling of adverse outcomes in electronic health records (EHRs) due to their ability to encode past clinical states. These deep, recurrent architectures have displayed increased performance compared to other modeling approaches in a number of tasks, fueling the interest in deploying deep models in clinical settings. One of the key elements in ensuring safe model deployment and building user trust is model explainability. Testing with Concept Activation Vectors (TCAV) has recently been introduced as a way of providing human-understandable explanations by comparing high-level concepts to the network's gradients. While the technique has shown promising results in real-world imaging applications, it has not been applied to structured temporal inputs. To enable an application of TCAV to sequential predictions in the EHR, we propose an extension of the method to time series data. We evaluate the proposed approach on an open EHR benchmark from the intensive care unit, as well as synthetic data where we are able to better isolate individual effects.
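To make the TCAV idea referenced in the abstract concrete: the method learns a Concept Activation Vector (CAV) in a layer's activation space and then scores how often the gradient of the class logit points along that vector. The sketch below is illustrative only and uses entirely synthetic activations and gradients; it also substitutes a mean-difference direction for the linear classifier that TCAV actually trains, which is an assumption made to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16                       # activation dimensionality at the chosen layer
concept_dir = np.zeros(d)
concept_dir[0] = 1.0         # ground-truth concept direction (synthetic)

# Synthetic layer activations for concept examples vs. random counterexamples.
concept_acts = rng.normal(size=(100, d)) + 3.0 * concept_dir
random_acts = rng.normal(size=(100, d))

# CAV: here the normalised mean-difference direction (a stand-in for the
# normal of the linear classifier trained in the original TCAV method).
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Synthetic gradients of the class logit w.r.t. the layer activations,
# one per test input; constructed to mostly align with the concept.
grads = 0.5 * concept_dir + 0.1 * rng.normal(size=(200, d))

# TCAV score: fraction of inputs whose directional derivative along the
# CAV is positive, i.e. for which the concept increases the class logit.
tcav_score = float(np.mean(grads @ cav > 0))
print(tcav_score)
```

For the paper's time-series extension, activations and gradients additionally carry a time index, so a score of this kind can be computed per step; the aggregation across steps is the subject of the proposed method, not of this sketch.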
Database:
arXiv
External link:
