Explainable AI: The new 42?

Authors: Katharina Holzinger, Peter Kieseberg, Freddy Lecue, Ajay Chander, Simone Stumpf, Randy Goebel, Zeynep Akata, Andreas Holzinger
Contributors: Alberta Machine Intelligence Institute (Amii), University of Alberta; Fujitsu Laboratories of America, Inc., Sunnyvale CA; SBA Research; Accenture Labs [Ireland]; Web-Instrumented Man-Machine Interactions, Communities and Semantics (WIMMICS), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria); Scalable and Pervasive softwARe and Knowledge Systems (Laboratoire I3S - SPARKS); Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S), Université Nice Sophia Antipolis (1965-2019) (UNS), COMUE Université Côte d'Azur (2015-2019) (COMUE UCA), Centre National de la Recherche Scientifique (CNRS), Université Côte d'Azur (UCA); Max Planck Institute for Informatics [Saarbrücken]; Amsterdam Machine Learning Lab (AMLab), University of Amsterdam [Amsterdam] (UvA); City University of London; St. Pölten University of Applied Sciences; Institut für Medizinische Informatik, Statistik und Dokumentation [Graz] (IMI), Medical University Graz; Institute of Interactive Systems and Data Science (ISDS), Graz University of Technology [Graz] (TU Graz); Andreas Holzinger, Peter Kieseberg, A Min Tjoa, Edgar Weippl; IFIP TC 5, TC 8, TC 12, WG 8.4, WG 8.9, WG 12.9
Language: English
Year of publication: 2018
Source: Lecture Notes in Computer Science, ISBN 9783319997391, ISSN 0302-9743
2nd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2018, Hamburg, Germany, pp. 295-303, ⟨10.1007/978-3-319-99740-7_21⟩
DOI: 10.1007/978-3-319-99740-7_21
Description: Part 5: MAKE Explainable AI; International audience; Explainable AI is not a new field. Since at least the early exploitation of C.S. Peirce's abductive reasoning in the expert systems of the 1980s, there have been reasoning architectures to support an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. So explainability is at least as old as early AI, and a natural consequence of the design of AI systems. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrow, well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. The evolution of formal reasoning architectures to incorporate principled probabilistic reasoning helped address the capture and use of uncertain knowledge. The recent and relatively rapid success of AI/machine learning solutions arises from neural network architectures: a new generation of neural methods now scales to exploit the practical applicability of statistical and algebraic learning approaches in arbitrarily high-dimensional spaces. But despite their huge successes, largely on problems which can be cast as classification problems, their effectiveness is still limited by their un-debuggability and their inability to "explain" their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; their representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and, furthermore, there is no obvious instructional value: these high-performance systems cannot help humans improve. Even when we understand the underlying mathematical scaffolding of current machine learning architectures, it is often impossible to get insight into the internal working of the models; we need explicit modeling and reasoning tools to explain how and why a result was achieved. We also know that a significant challenge for future AI is contextual adaptation, i.e., systems that incrementally help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.
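To make the contrast concrete, the following is a minimal sketch of the kind of explanation function the abstract attributes to early expert systems: a toy forward-chaining rule engine that records which rule derived each conclusion, so that a MYCIN-style "why?" question can be answered by walking the derivation chain. The rule names and the toy medical facts below are invented purely for illustration; real systems such as MYCIN relied on large handcrafted knowledge bases and certainty-factor arithmetic.

# Toy illustration (invented rules/facts) of an expert-system explanation
# function: every derived conclusion can be traced back to the rules and
# given facts that produced it, yielding a human-reconstructable trace.

RULES = [
    # (rule name, antecedent facts, consequent fact)
    ("R1", {"fever", "cough"}, "respiratory_infection"),
    ("R2", {"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
]

def forward_chain(facts):
    """Apply rules to a fixpoint; remember which rule derived each fact."""
    derived_by = {}  # fact -> (rule name, antecedents)
    changed = True
    while changed:
        changed = False
        for name, antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                derived_by[consequent] = (name, antecedents)
                changed = True
    return facts, derived_by

def explain(fact, derived_by, depth=0):
    """Answer 'why?' by printing the derivation chain for a conclusion."""
    indent = "  " * depth
    if fact not in derived_by:
        print(f"{indent}{fact}  (given)")
        return
    rule, antecedents = derived_by[fact]
    print(f"{indent}{fact}  (by {rule}, from {sorted(antecedents)})")
    for antecedent in antecedents:
        explain(antecedent, derived_by, depth + 1)

facts, derived_by = forward_chain({"fever", "cough", "chest_pain"})
explain("suspect_pneumonia", derived_by)

Running this prints the chain from suspect_pneumonia back to the given findings: precisely the kind of human-understandable, reconstructable justification that end-to-end neural classifiers, as the abstract argues, do not provide.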
Database: OpenAIRE