Explainable Artificial Intelligence (xAI)
Workshop Objective:
This hybrid half-day workshop will explore key advancements in explainable AI (xAI) at the intersection of AI, Mathematical Sciences, Engineering, and Economics. Experts will discuss both theoretical foundations and practical applications, focusing on making AI models more transparent and interpretable, which is essential for building trust in complex AI systems.
Preliminary list of speakers
Prof. Dr. Gitta Kutyniok:
Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München. (Website: https://www.ai.math.uni-muenchen.de/members/professor/index.html)
Title: Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement
Abstract: Deep learning still has drawbacks with respect to trustworthiness, i.e., the requirement that a method be comprehensible, fair, safe, and reliable. To mitigate the potential risks of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. A central question is therefore to what extent trustworthy deep learning can be realized. Establishing the properties constituting trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework which enables us to analyze whether a transparent implementation in a given computing model is feasible. As an example, we apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models, represented by Turing and Blum-Shub-Smale machines, respectively. Building on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree.
-----------------------------------------------------------------------------------------------------------------
Maximilian Fleissner
PhD candidate at TUM, School of Computation, Information and Technology (website: https://www.cs.cit.tum.de/en/tfai/home/)
Title: Explaining (Kernel) Clustering via Decision Trees
Abstract: Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods. Recently, there has been a surge of interest in explaining the classic k-means algorithm using axis-aligned decision trees. However, interpretable variants of k-means have limited applicability in practice, where more flexible clustering methods are often needed to obtain useful partitions of the data. We investigate interpretable kernel clustering, and propose algorithms that construct decision trees to approximate the partitions induced by kernel k-means, a nonlinear extension of k-means. Our method attains worst-case bounds on the clustering cost induced by the tree. In addition, we introduce the notion of an explainability-to-noise ratio for mixture models. Assuming sub-Gaussianity of the mixture components, we derive upper and lower bounds on the error rate of a suitably constructed decision tree, capturing the intuition that well-clustered data can indeed be explained well with a decision tree.
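The following is a minimal, hypothetical Python sketch (assuming scikit-learn) of the general idea behind the abstract: cluster the data with a nonlinear, kernel-based method, then fit a small axis-aligned decision tree to reproduce the cluster assignments so that its threshold rules act as a human-readable surrogate for the partition. SpectralClustering is used here only as a stand-in for kernel k-means; this is an illustration of the concept, not the algorithm presented in the talk.

# Illustrative sketch only (assumes scikit-learn); not the speaker's method.
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering   # kernel-based stand-in for kernel k-means
from sklearn.tree import DecisionTreeClassifier, export_text

# Two-moons data: a partition that plain k-means cannot recover, but a kernel method can.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Step 1: obtain a nonlinear clustering of the data.
labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=20.0,
                            random_state=0).fit_predict(X)

# Step 2: fit a small axis-aligned decision tree to mimic the cluster assignments;
# its threshold rules serve as a human-readable surrogate explanation of the partition.
tree = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0).fit(X, labels)

agreement = (tree.predict(X) == labels).mean()
print(f"Tree reproduces the clustering on {agreement:.1%} of the points")
print(export_text(tree, feature_names=["x1", "x2"]))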
-----------------------------------------------------------------------------------------------------------------
Dr. Vikram Sunkara
Head of Explainable A.I. for Biology, Zuse Institute Berlin (website: https://www.zib.de/members/sunkara)
Title: The Mathematical Aspects of xAI and How It Can Be Used in Applications