Physikalisch-Technische Bundesanstalt

Mathematical Modelling and Data Analysis

Department 8.4



Title: Benchmarking the influence of pre-training on explanation performance in MR image classification
Author(s): M. Oliveira, R. Wilming, B. Clark, C. Budding, F. Eitel, K. Ritter and S. Haufe
Journal: Frontiers in Artificial Intelligence
Year: 2024
Volume: 7
DOI: 10.3389/frai.2024.1330919
ISSN: 2624-8212
Web URL: https://www.frontiersin.org/articles/10.3389/frai.2024.1330919
Tags: 8.4,8.44
Abstract: Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often combined with transfer learning, which improves performance when training data for the task are scarce. The resulting models are highly complex and typically provide no insight into their predictive mechanisms, motivating the field of “explainable” artificial intelligence (XAI). However, previous studies have rarely evaluated the “explanation performance” of XAI methods quantitatively against ground-truth data, and the influence of transfer learning on objective measures of explanation performance has not been investigated. Here, we propose a benchmark dataset that allows explanation performance to be quantified in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to study the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when only correctly classified examples are considered. We further observe that explanation performance depends strongly on the task used for pre-training and on the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance.
