JCSE, vol. 18, no. 1, pp. 19-28, 2024
DOI: http://dx.doi.org/10.5626/JCSE.2024.18.1.00
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units
Ho Kyung Shin and Woo-Jeoung Nam
School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
Abstract: The lack of interpretability has emerged as an obstacle to the adoption of deep neural networks (DNNs) in certain domains, which has led to increasing interest in addressing transparency issues so that DNNs can fulfill their impressive potential. In this paper, we demonstrate the effectiveness of various attribution techniques in explaining the diagnostic decisions of DNNs by visualizing the predicted suspicious regions in the image. By utilizing the objectness characteristics that DNNs have learned, fully decomposing the network prediction enables precise visualization of the targeted lesions. To verify our work, we conduct experiments on chest X-ray diagnosis using publicly accessible datasets. As an intuitive assessment metric for explanations, we report the intersection over union (IoU) between the visual explanation and the bounding boxes of lesions. The experimental results show that recently proposed attribution methods can visualize more specific localizations for diagnostic decisions than the traditionally used class activation mapping. We also analyze the inconsistency of intentions between humans and DNNs, which is easily obscured by high performance. Visualizing the relevant factors makes it possible to confirm whether the decision criterion is consistent with the training strategy. Our analysis, which unmasks the reasoning of machine intelligence, demonstrates the need for explainability in medical diagnostic decision-making.
Keywords: Deep learning; Explainable computer-aided diagnosis; Explainable AI; Visual explanation; Medical imaging analysis
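As a rough sketch of the IoU-based evaluation mentioned in the abstract, the example below binarizes an attribution map by thresholding its normalized values and compares the result with a rasterized lesion bounding box. The 0.5 threshold, the 224x224 resolution, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attribution_to_mask(attribution, threshold=0.5):
    """Binarize a 2-D attribution map by thresholding its min-max normalized values.

    The threshold value is an illustrative choice, not the paper's setting.
    """
    attr = attribution - attribution.min()
    attr = attr / (attr.max() + 1e-8)   # scale to [0, 1]
    return attr >= threshold            # boolean mask of "suspicious" pixels

def bbox_to_mask(shape, bbox):
    """Rasterize a lesion bounding box (x_min, y_min, x_max, y_max) into a boolean mask."""
    mask = np.zeros(shape, dtype=bool)
    x0, y0, x1, y1 = bbox
    mask[y0:y1, x0:x1] = True
    return mask

def iou(mask_a, mask_b):
    """Intersection over union between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union > 0 else 0.0

# Toy usage: a random stand-in for an attribution map against a hypothetical lesion box.
rng = np.random.default_rng(0)
attribution = rng.random((224, 224))
lesion_mask = bbox_to_mask((224, 224), (60, 80, 140, 160))
print(f"IoU = {iou(attribution_to_mask(attribution), lesion_mask):.3f}")
```

In practice, the random array would be replaced by the explanation map produced by an attribution method (e.g., class activation mapping or a decomposition-based technique) for the predicted class, and the IoU would be averaged over all annotated lesions in the test set.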