ItpCtrl-AI: End-to-end interpretable and controllable artificial intelligence by modeling radiologists' intentions.
Author: | Pham TT; AICV Lab, Department of EECS, University of Arkansas, AR 72701, USA. Electronic address: tp030@uark.edu., Brecheisen J; AICV Lab, Department of EECS, University of Arkansas, AR 72701, USA. Electronic address: jmbreche@uark.edu., Wu CC; MD Anderson Cancer Center, Houston, TX 77079, USA. Electronic address: ccwu1@mdanderson.org., Nguyen H; Department of ECE, University of Houston, TX 77204, USA. Electronic address: hvnguy35@central.uh.edu., Deng Z; Department of CS, University of Houston, TX 77204, USA. Electronic address: zdeng4@central.uh.edu., Adjeroh D; Department of CSEE, West Virginia University, WV 26506, USA. Electronic address: donald.adjeroh@mail.wvu.edu., Doretto G; Department of CSEE, West Virginia University, WV 26506, USA. Electronic address: gianfranco.doretto@mail.wvu.edu., Choudhary A; University of Arkansas for Medical Sciences, Little Rock, AR 72705, USA. Electronic address: achoudhary@uams.edu., Le N; AICV Lab, Department of EECS, University of Arkansas, AR 72701, USA. Electronic address: thile@uark.edu. |
Language: | English |
Source: | Artificial intelligence in medicine [Artif Intell Med] 2024 Dec 12; Vol. 160, pp. 103054. Date of Electronic Publication: 2024 Dec 12. |
DOI: | 10.1016/j.artmed.2024.103054 |
Abstract: | Deep learning has attracted great interest for computer-aided diagnosis systems because of its impressive performance in both general and medical domains. However, a notable challenge is the lack of explainability of many advanced models, which poses risks in critical applications such as diagnosing findings on chest X-rays (CXR). To address this problem, we propose ItpCtrl-AI, a novel end-to-end interpretable and controllable framework that mirrors the radiologist's decision-making process. By emulating radiologists' eye-gaze patterns, our framework first determines the focal areas and assesses the significance of each pixel within those regions. The model thereby generates an attention heatmap representing the radiologist's attention, which is then used to extract the attended visual information for diagnosing findings. Because the framework accepts directional input, it is controllable by the user. Furthermore, by displaying the eye-gaze heatmap that guides the diagnostic conclusion, the underlying rationale behind the model's decision is revealed, making it interpretable. In addition to developing an interpretable and controllable framework, our work includes the creation of a dataset, named Diagnosed-Gaze++, which aligns medical findings with eye-gaze data. Extensive experimentation validates the effectiveness of our approach in generating accurate attention heatmaps and diagnoses: our model not only identifies medical findings accurately but also precisely reproduces the eye-gaze attention of radiologists. The dataset, models, and source code will be made publicly available upon acceptance. Competing Interests: Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Trong-Thang Pham reports financial support was provided by National Science Foundation. 
Trong-Thang Pham reports financial support was provided by National Institutes of Health. Carol Wu reports financial support was provided by National Institutes of Health. Carol Wu reports financial support was provided by National Science Foundation. Jacob Brecheisen reports financial support was provided by National Science Foundation. Hien Nguyen reports financial support was provided by National Science Foundation. Hien Nguyen reports financial support was provided by National Institutes of Health. Zhigang Deng reports financial support was provided by National Science Foundation. Zhigang Deng reports financial support was provided by National Institutes of Health. Ngan Le reports financial support was provided by National Science Foundation. Ngan Le reports financial support was provided by National Institutes of Health. Donald Adjeroh reports financial support was provided by National Science Foundation. Gianfranco Doretto reports financial support was provided by National Science Foundation. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2024 Elsevier B.V. All rights reserved.) |
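The abstract describes a two-stage pipeline: predict a gaze attention heatmap, then weight the image features by that heatmap before classifying findings. The following is a minimal illustrative sketch of that idea only; the `predict_heatmap` formulation, layer sizes, and classifier are hypothetical stand-ins, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_heatmap(image: np.ndarray) -> np.ndarray:
    """Stand-in gaze predictor: a saliency map normalized to [0, 1].
    (A hypothetical placeholder for the learned gaze-heatmap model.)"""
    sal = np.abs(image - image.mean())
    return sal / (sal.max() + 1e-8)

def diagnose(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Attention-weighted multi-label classifier over pixel features."""
    heatmap = predict_heatmap(image)
    attended = (image * heatmap).ravel()   # gaze-weighted visual features
    logits = weights @ attended            # one logit per candidate finding
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid: independent findings

image = rng.random((8, 8))                 # toy 8x8 "chest X-ray"
weights = rng.normal(size=(3, 64))         # 3 hypothetical findings
probs = diagnose(image, weights)
print(probs.shape)  # (3,)
```

The key design point mirrored here is that the heatmap is an explicit intermediate: it can be inspected (interpretability) or replaced by a user-supplied map (controllability) without retraining the classifier.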
Database: | MEDLINE |
External link: |