The Misclassification Likelihood Matrix: Some Classes Are More Likely To Be Misclassified Than Others

Author: Sikar, Daniel, Garcez, Artur, Bloomfield, Robin, Weyde, Tillman, Peeroo, Kaleem, Singh, Naman, Hutchinson, Maeve, Laksono, Dany, Reljan-Delaney, Mirela
Publication year: 2024
Subject:
Document type: Working Paper
Description: This study introduces the Misclassification Likelihood Matrix (MLM) as a novel tool for quantifying the reliability of neural network predictions under distribution shifts. The MLM is obtained by leveraging softmax outputs and clustering techniques to measure the distances between a trained neural network's predictions and class centroids. By analyzing these distances, the MLM provides a comprehensive view of the model's misclassification tendencies, enabling decision-makers to identify the most common and critical sources of error. The MLM allows model improvements to be prioritized and decision thresholds to be set according to acceptable risk levels. The approach is evaluated on the MNIST dataset using a Convolutional Neural Network (CNN) and a perturbed version of the dataset that simulates distribution shifts. The results demonstrate the effectiveness of the MLM in assessing the reliability of predictions and highlight its potential for enhancing the interpretability and risk-mitigation capabilities of neural networks. The implications of this work extend beyond image classification, with ongoing applications in autonomous systems, such as self-driving cars, to improve the safety and reliability of decision-making in complex, real-world environments.
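To make the idea concrete, the sketch below shows one plausible way an MLM-style matrix could be computed from softmax outputs, following the description above. It is a minimal illustration, not the authors' exact formulation: the centroid definition (per-class mean of softmax vectors), the Euclidean distance, and the function names `class_centroids` and `misclassification_likelihood_matrix` are all assumptions made for this example.

import numpy as np

def class_centroids(softmax_outputs, labels, num_classes):
    """Mean softmax vector per true class (assumed centroid definition)."""
    centroids = np.zeros((num_classes, softmax_outputs.shape[1]))
    for c in range(num_classes):
        centroids[c] = softmax_outputs[labels == c].mean(axis=0)
    return centroids

def misclassification_likelihood_matrix(softmax_outputs, labels, centroids):
    """MLM[i, j]: fraction of class-i samples whose softmax output lies
    nearest to the centroid of class j (j != i), i.e. an estimated
    likelihood of confusing class i with class j under Euclidean distance."""
    num_classes = centroids.shape[0]
    mlm = np.zeros((num_classes, num_classes))
    # Distance of every sample's softmax vector to every class centroid.
    dists = np.linalg.norm(
        softmax_outputs[:, None, :] - centroids[None, :, :], axis=2
    )
    nearest = dists.argmin(axis=1)
    for i in range(num_classes):
        mask = labels == i
        if not mask.any():
            continue
        for j in range(num_classes):
            if i != j:
                mlm[i, j] = np.mean(nearest[mask] == j)
    return mlm

# Toy usage with random vectors standing in for CNN softmax outputs on MNIST.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(1000, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y = rng.integers(0, 10, size=1000)
    C = class_centroids(probs, y, 10)
    print(misclassification_likelihood_matrix(probs, y, C).round(3))

In such a formulation, row i of the matrix summarizes which classes the model tends to confuse class i with, which is how the paper's decision thresholds and prioritization of critical errors could be driven.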
Comment: 9 pages, 7 figures, 1 table
Database: arXiv