
Study on the analytical behavior of explainable AI methods published

28.08.2023

Artificial intelligence and machine learning tools promise to assist or even drive high-stakes decisions in areas such as finance, medicine, and autonomous driving. The upcoming AI Act will require that the principles by which such algorithms arrive at their predictions be transparent. However, the field of explainable AI (XAI) lacks formal problem specifications and theoretical results. In a new study, working group 8.44 provides analytical insight into the behavior of different XAI methods in a simple toy setting and outlines possible misinterpretations.


The new publication on the foundations of XAI considers a 2D classification problem in which only one of the two features has a statistical association with the prediction target, while the other is a "suppressor" feature that improves the model's prediction performance but is itself not predictive. The authors derive analytical expressions for some of the most popular XAI methods, showing that the majority of them assign non-zero importance to the suppressor. This detected influence of the suppressor is easily misinterpreted, a consequence of the general lack of a formal specification of the XAI problem. The study was presented at the prestigious International Conference on Machine Learning (ICML) this year.
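To illustrate the suppressor phenomenon, the following minimal Python sketch constructs such a feature. It is not the publication's exact construction; the data-generating process and all variable names are illustrative assumptions. A feature x2 that is statistically unrelated to the target nonetheless receives a large weight in the optimal linear model, because it cancels noise shared with the informative feature x1.

```python
# Illustrative sketch of a suppressor feature (not the paper's exact
# setup): x2 is statistically unrelated to the target y, yet the
# optimal linear model needs it to cancel shared noise in x1.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.choice([-1.0, 1.0], size=n)   # binary prediction target
d = rng.normal(size=n)                # shared noise (the "distractor")
x1 = y + d                            # informative feature, corrupted by d
x2 = d                                # suppressor: noise only, no signal

# On its own, x2 carries no information about y ...
print("corr(x2, y):", np.corrcoef(x2, y)[0, 1])   # approximately 0

# ... but the least-squares linear model assigns it a large weight,
# because x1 - x2 = y recovers the target exactly.
X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("model weights:", w)            # approximately [1, -1]

# Any XAI method that reads importance off model weights or gradients
# will therefore flag the non-predictive suppressor x2 as important.
```

In this sketch, a weight- or gradient-based importance score attributes roughly equal magnitude to both features, even though only x1 carries information about the target; this is the kind of misinterpretation the study warns against.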

Contact

Head of Press and Public Relations

Dr. Dr. Jens Simon

Phone: (0531) 592-3005
Email: jens.simon(at)ptb.de

Address

Physikalisch-Technische Bundesanstalt
Bundesallee 100
38116 Braunschweig