June 22, 2023, 5:30 p.m. – 6:30 p.m.
Prof. Dr. Ute Schmid
AISA Colloquium
Prof. Ute Schmid, University of Bamberg
Towards Trustworthy Human-AI Partnerships with Explanatory, Interactive and Neuro-symbolic Approaches to Machine Learning
For many practical applications of machine learning, it is appropriate or even necessary to draw on human expertise to compensate for scarce or low-quality data. Taking into account knowledge that is available in explicit form reduces the amount of data needed for learning. Furthermore, even when domain experts cannot formulate their knowledge explicitly, they can typically recognize and correct erroneous decisions or actions. This type of implicit knowledge can be injected into the learning process to guide model adaptation. These insights have led to the so-called third wave of AI, with a focus on explainability (XAI). In the talk, I will introduce research on explanatory and interactive machine learning. I will argue that explanations are a necessary ingredient for justified trust in AI systems and a prerequisite for interactive AI. I will present inductive programming as a powerful approach to learning interpretable models in relational domains and discuss first ideas about how to combine deep learning and inductive programming for concept learning in relational domains.