Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
| Author: | Sinhamahapatra, Poulami; Heidemann, Lena; Monnet, Maureen; Roscher, Karsten |
|---|---|
| Language: | English |
| Year of publication: | 2022 |
| Subject: | FOS: Computer and information sciences; prototype-based learning; Computer Science - Machine Learning; classification; AI; trustworthy AI; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; interpretability; global explainability; artificial intelligence; safety-critical; Machine Learning (cs.LG) |
| Description: | Explaining black-box Artificial Intelligence (AI) models is a cornerstone of trustworthy AI and a prerequisite for its use in safety-critical applications, where AI models must reliably assist humans in critical decisions. However, instead of trying to explain our models post-hoc, we need models which are interpretable by design, built on a reasoning process similar to that of humans, exploiting meaningful high-level concepts such as shapes, textures, or object parts. Learning such concepts is often hindered by the need for explicit specification and annotation up front. Instead, prototype-based learning approaches such as ProtoPNet claim to discover visually meaningful prototypes in an unsupervised way. In this work, we propose a set of properties that those prototypes have to fulfill to enable human analysis, e.g. as part of a reliable model assessment case, and analyse existing methods in the light of these properties. Using a ‘Guess Who?’ game, we find that these prototypes are still a long way from providing definitive explanations. We quantitatively validate our findings by conducting a user study, which indicates that many of the learnt prototypes are not considered useful for human understanding. We discuss the missing links in the existing methods and present a potential real-world application motivating the need to progress towards truly human-interpretable prototypes. |
| Database: | OpenAIRE |
| External link: | |