Proceedings Year: 2022

Evaluation of Importance Estimators in Deep Learning Classifiers for Computed Tomography

Lennart Brocki, Wistan Marchadour, Jonas Maison, Bogdan Badic, Panagiotis Papadimitroulas, Mathieu Hatt, Neo Christopher Chung

Abstract

Deep learning has shown superb performance in detecting objects and classifying images, holding great promise for the analysis of medical imaging. Translating this success to medical imaging, where doctors need to understand the underlying process, requires the capability to interpret and explain the predictions of neural networks. Interpretability of deep neural networks often relies on estimating the importance of input features (e.g., pixels) with respect to the outcome (e.g., class probability). However, a number of importance estimators (also known as saliency maps) have been developed, and it is unclear which ones are more relevant for medical imaging applications. In the present work, we investigated the performance of several importance estimators in explaining the classification of computed tomography (CT) images by a deep convolutional network, using three distinct evaluation metrics. Specifically, a ResNet-50 was trained to classify CT scans of lungs acquired with and without contrast agents, in which clinically relevant anatomical areas were manually delineated by experts as segmentation masks.

Three evaluation metrics were used to quantify different aspects of interpretability. First, the model-centric fidelity measures the decrease in model accuracy when certain inputs are perturbed. Second, concordance between importance scores and the expert-defined segmentation masks is measured on a pixel level by receiver operating characteristic (ROC) curves. Third, region-wise overlap between an XRAI-based map and the segmentation mask is measured by the Dice Similarity Coefficient (DSC). Overall, two versions of SmoothGrad topped the fidelity and ROC rankings, whereas both Integrated Gradients and SmoothGrad excelled in the DSC evaluation. Interestingly, there was a critical discrepancy between model-centric (fidelity) and human-centric (ROC and DSC) evaluation. Expert expectation and intuition embedded in segmentation masks do not necessarily align with how the model arrived at its prediction. Understanding this difference in interpretability would help harness the power of deep learning in medicine.
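To make the region-wise DSC evaluation concrete, the sketch below computes the standard Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|), between two binary masks. This is a generic illustration of the metric only; the toy arrays and the function name are hypothetical, not the authors' actual XRAI-based pipeline or data.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2 * |A intersect B| / (|A| + |B|), in [0, 1].
    """
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    intersection = np.logical_and(pred, true).sum()
    total = pred.sum() + true.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 example: a thresholded saliency region vs. an expert mask
# (both arrays are illustrative placeholders, not real CT data).
saliency_region = np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])
expert_mask = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0]])

dsc = dice_coefficient(saliency_region, expert_mask)
# intersection = 3, |A| + |B| = 4 + 3 = 7, so DSC = 6/7 ≈ 0.857
```

In the paper's setting, the predicted mask would come from binarizing an XRAI attribution map (e.g., keeping its top-ranked regions) before comparing it to the expert segmentation.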

Dates and versions

hal-04307817, version 1 (26-11-2023)

Cite

Lennart Brocki, Wistan Marchadour, Jonas Maison, Bogdan Badic, Panagiotis Papadimitroulas, et al. Evaluation of Importance Estimators in Deep Learning Classifiers for Computed Tomography. Lecture Notes in Computer Science, vol. 13283, Springer International Publishing, pp. 3-18, 2022. ⟨10.1007/978-3-031-15565-9_1⟩. ⟨hal-04307817⟩