Artificial intelligence and machine learning tools promise to assist or drive high-stakes decisions in areas such as finance, medicine, and autonomous driving. The upcoming AI Act will require that the principles by which such algorithms arrive at their predictions be transparent. However, the field of explainable AI (XAI) still lacks formal problem specifications and theoretical results. In a new study, working group 8.44 provides analytical insight into the behavior of several popular XAI methods in a simple toy setting and outlines possible misinterpretations.
The new publication on the foundations of XAI considers a two-dimensional classification problem in which only one of the two features has a statistical association with the prediction target, while the other is a "suppressor" feature that enhances the model's prediction performance but is itself not predictive. The authors derive analytical expressions for some of the most popular XAI methods and show that the majority of them assign non-zero importance to the suppressor. The detected influence of the suppressor could easily be misinterpreted, a consequence of the lack of a formal specification of the XAI problem in general. The study was presented at the prestigious International Conference on Machine Learning (ICML) this year.
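The suppressor effect can be reproduced in a few lines. The following sketch is illustrative only: the Gaussian generative model, the variable names, and the logistic-regression classifier are assumptions, not the exact construction from the paper. It builds two features, one mixing signal and noise and one containing the noise alone, and shows that the fitted linear model gives the pure-noise "suppressor" a large weight, because subtracting it cancels the shared noise, even though the suppressor on its own carries no information about the label:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# s carries the class signal; d is noise shared by both features.
s = rng.standard_normal(n)
d = rng.standard_normal(n)
y = (s > 0).astype(int)

# Feature 1 mixes signal and noise; feature 2 is the pure "suppressor":
# it contains no signal, only the shared noise component d.
X = np.column_stack([s + d, d])

clf = LogisticRegression().fit(X, y)

# The weights point along (1, -1): subtracting feature 2 cancels the shared
# noise d and recovers the signal s, so the suppressor gets a large weight.
print("weights:", clf.coef_[0])

# Yet the suppressor alone is statistically unrelated to the target.
print("corr(suppressor, y):", np.corrcoef(X[:, 1], y)[0, 1])

# Removing the suppressor hurts accuracy: the noise can no longer be cancelled.
acc_both = clf.score(X, y)
acc_signal_only = LogisticRegression().fit(X[:, :1], y).score(X[:, :1], y)
print(f"accuracy with suppressor: {acc_both:.3f}, without: {acc_signal_only:.3f}")
```

Since weight- and gradient-based attribution for a linear model reduces to its coefficients, such methods flag the suppressor as important here, illustrating the kind of misinterpretation the study warns about.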
Related publication: Wilming, R., Kieslich, L., Clark, B., & Haufe, S. (2023). Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables. Proceedings of the 40th International Conference on Machine Learning (ICML), PMLR 202:37091-37107.
Head of Press and Public Relations
Dr. Dr. Jens Simon
Phone: (0531) 592-3005
E-mail: jens.simon(at)ptb.de
Karin Conring
Phone: (0531) 592-3006
Fax: (0531) 592-3008
E-mail: karin.conring(at)ptb.de
Physikalisch-Technische Bundesanstalt
Bundesallee 100
38116 Braunschweig