Abstract: |
This study presents a diagnostic framework for healthcare using explainable artificial intelligence and machine learning. Previous studies show that the performance of prediction models depends heavily on the relevance and independence of the feature set; hence, various feature selection methods have been presented in the literature. Moreover, previous neural-network-based medical diagnostic models offer accurate predictions but do not generate explicit decision rules. A novel research framework combining incremental feature selection and interpretable machine learning is proposed. First, non-redundant and relevant features are selected. Then, the initial weights obtained during feature learning are fed to an interpretable neural network to obtain global and local explanations. The proposed framework is demonstrated on an open-source medical dataset related to glioma, and the best-fit model is identified. In addition, a glioma-grading app built on the underlying predictive model is developed to offer decision support to physicians and patients.