
Project

Advancing the theory and practice of machine learning model explanations in biomedicine

This project aims to advance both the theoretical foundation of xAI and the practical, in particular clinical, utility of explanation methods. We will develop novel, useful definitions of feature importance that can be leveraged to generate synthetic ground-truth data. These data will be used to quantitatively assess the "explanation performance" of existing xAI methods.

In healthcare, as in other safety-critical domains, there is a strong desire – driven by clinical and scientific but also ethical and legal considerations – to understand how a given machine learning model arrives at a certain prediction. This motivates the field of explainable or interpretable artificial intelligence (xAI). Explaining a prediction is an inherently unsupervised problem, meaning that the ground truth cannot, even retrospectively, be obtained in practice. Validation of unsupervised methods is a prerequisite for their application in clinical contexts, and this principle must also hold for xAI methods. However, due to the lack of ground-truth information in real data, the vast literature on xAI resorts to subjective qualitative assessments or surrogate metrics, such as relative prediction accuracy, to demonstrate the "plausibility" of the provided explanations. Novel, theoretically founded definitions of explainability, along with appropriately designed synthetic ground-truth data, are needed to benchmark existing xAI approaches and to drive the development of improved methods.
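
The benchmarking idea can be illustrated with a minimal sketch. The assumptions here (a linear data-generating process, logistic regression as the predictive model, absolute model weights as a stand-in explanation, and an AUC-type score of how well the truly informative features are recovered) are illustrative only and do not reflect the project's actual definitions or methods:

  # Minimal sketch (illustrative assumptions, not the project's benchmark):
  # generate synthetic data in which only a known subset of features carries
  # class information, train a classifier, and score how well a simple
  # importance estimate recovers that ground truth.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import roc_auc_score

  rng = np.random.default_rng(0)
  n_samples, n_features, n_informative = 1000, 20, 5

  # Ground truth: only the first n_informative features influence the label.
  ground_truth = np.zeros(n_features, dtype=bool)
  ground_truth[:n_informative] = True

  X = rng.standard_normal((n_samples, n_features))
  w_true = np.zeros(n_features)
  w_true[:n_informative] = rng.uniform(1.0, 2.0, n_informative)
  y = (X @ w_true + 0.5 * rng.standard_normal(n_samples) > 0).astype(int)

  # Train a model and use |weights| as a stand-in "explanation".
  clf = LogisticRegression(max_iter=1000).fit(X, y)
  importance = np.abs(clf.coef_).ravel()

  # "Explanation performance": how well the importance scores rank the truly
  # informative features above the uninformative ones.
  print("explanation AUC:", roc_auc_score(ground_truth, importance))
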

Cooperation partners:

  • Machine Learning Group, Technische Universität Berlin
  • Berlin Institute for the Foundations of Learning and Data (BIFOLD)
  • Berlin Center for Advanced Neuroimaging (BCAN), Charité - Universitätsmedizin Berlin

Publications

S. Haufe, F. Meinecke, K. Görgen, S. Dähne, J.-D. Haynes, B. Blankertz, F. Bießmann: On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 2014.