Quantifying the Confidence of Anomaly Detectors in Their Example-Wise Predictions
Author: | Lorenzo Perini, Jesse Davis, Vincent Vercruyssen |
---|---|
Contributors: | Hutter, F, Kersting, K, Lijffijt, J, Valera, I |
Year of publication: | 2021 |
Subject: |
Computer science, Anomaly detection, Bayesian probability, Convergence, Benchmarking, Data mining, Reliability (statistics), Interpretability |
Source: | Machine Learning and Knowledge Discovery in Databases, ISBN 9783030676636, ECML/PKDD (3) |
Description: | Anomaly detection focuses on identifying examples in the data that somehow deviate from what is expected or typical. Algorithms for this task usually assign a score to each example that represents how anomalous the example is. Then, a threshold on the scores turns them into concrete predictions. However, each algorithm uses a different approach to assign the scores, which makes them difficult to interpret and can quickly erode a user's trust in the predictions. This paper introduces an approach for assessing the reliability of any anomaly detector's example-wise predictions. To do so, we propose a Bayesian approach for converting anomaly scores to probability estimates. This enables the anomaly detector to assign a confidence score to each prediction which captures its uncertainty in that prediction. We theoretically analyze the convergence behaviour of our confidence estimate. Empirically, we demonstrate the effectiveness of the framework in quantifying a detector's confidence in its predictions on a large benchmark of datasets. |
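To make the described pipeline concrete, here is a minimal sketch of one way a score-to-probability conversion with a per-prediction confidence could look. This is an illustration under stated assumptions, not the paper's exact method: it assumes a Beta(1, 1) prior over the chance that a training score falls below the example's score, and it derives a confidence from a Binomial tail around the contamination-based decision threshold; both choices and all function names are hypothetical.

```python
import math


def score_to_probability(train_scores, s):
    """Bayesian estimate (Beta(1,1) prior, i.e. Laplace smoothing) of the
    probability that a random training score does not exceed score s."""
    n = len(train_scores)
    k = sum(1 for t in train_scores if t <= s)  # empirical exceedance count
    return (k + 1) / (n + 2)


def prediction_confidence(train_scores, s, contamination):
    """Confidence that the 'anomaly' prediction for score s is stable.

    Assumption: the detector flags the top `contamination` fraction of
    scores. The confidence is the Binomial tail probability that, under
    a resampled training set of the same size, at least n*(1 - contamination)
    scores would still fall below s (so s would still be flagged).
    """
    n = len(train_scores)
    p = score_to_probability(train_scores, s)
    threshold = math.floor(n * (1 - contamination))
    return sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(threshold, n + 1)
    )
```

With this sketch, an example whose score sits far above the bulk of training scores receives a confidence near 1, while a borderline example near the contamination threshold receives a confidence near 0.5, which is the qualitative behaviour the abstract describes.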
Database: | OpenAIRE |
External link: |