On Robustness and Explainability to reach Trustworthy AI

December 5, 2023, 5:30 p.m. (CET)

ELLIS Distinguished Lecture Series talk by Andreas Holzinger

Time: December 5, 2023, 5:30 p.m. – 7:30 p.m.
Meeting mode: online


Thanks to advances in statistical machine learning, AI is enjoying renewed popularity today. However, two properties are still in great need of improvement: a) robustness and b) explainability. In many application domains, the question of why a particular result was obtained is often more important than the result itself. This is directly related to robustness, because disturbances in the input data can have dramatic effects on the output and lead to completely different results. This is relevant in all critical areas where we work with real data from our environment, i.e. where we do not have i.i.d. laboratory data. Therefore, the use of AI in real-world domains that impact human life (agriculture, climate, forestry, health, etc.) has led to an increased demand for trustworthy AI. In sensitive areas where traceability, transparency and interpretability are required, explainable AI (XAI) is now even essential due to regulatory requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it can be beneficial to include a human in the loop. A human expert can sometimes (not always, of course) bring experience and conceptual understanding to the AI pipeline.


Andreas Holzinger (University of Natural Resources and Life Sciences, Vienna) pioneered interactive machine learning with the human-in-the-loop, promoting robustness and explainability to foster trustworthy AI. He advocates a synergistic approach of Human-Centered AI (HCAI) to put the human in control of AI, aligning artificial intelligence with human intelligence, human values, ethical principles, and legal requirements to ensure secure and safe human-machine interaction. For his achievements he was elected a member of Academia Europaea (the European Academy of Sciences) in 2019, of the European Laboratory for Learning and Intelligent Systems (ELLIS) in 2020, and Fellow of the International Federation for Information Processing (IFIP) in 2021. Andreas Holzinger serves as a consultant for the Canadian, US, UK, Swiss, French, Italian and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission (EC). Andreas is on the advisory board of the Artificial Intelligence Strategy "AI made in Germany 2030" of the German Federal Government. He obtained his Ph.D. with a topic in Cognitive Science from the University of Graz in 1998 and his Habilitation (UOG 93, venia docendi) in Computer Science from Graz University of Technology in 2003. Andreas was Visiting Professor for Machine Learning & Knowledge Extraction in Verona (Italy), at RWTH Aachen (Germany), and at University College London (UK).

From July 2019 until February 2022, Andreas was a Visiting Professor for explainable AI at the University of Alberta (Canada). Andreas Holzinger has been appointed full professor for digital transformation in smart farm and forest operations at the University of Natural Resources and Life Sciences Vienna and gave his inaugural lecture on November 7, 2022; see: https://human-centered.ai/antrittsvorlesung-andreas-holzinger/.

