Dimensionality-induced information loss of outliers in deep neural networks

Authors: Uematsu, Kazuki; Haruki, Kosuke; Suzuki, Taiji; Kimura, Mitsuhiro; Takimoto, Takahiro; Nakagawa, Hideyuki
Publication year: 2024
Document type: Working Paper
DOI: 10.1007/978-3-031-70341-6_9
Abstract: Out-of-distribution (OOD) detection is a critical issue for the stable and reliable operation of systems that use deep neural networks (DNNs). Although many OOD detection methods have been proposed, it remains unclear how the differences between in-distribution (ID) and OOD samples arise at each processing step inside a DNN. We experimentally clarify this issue by investigating the layer dependence of feature representations from multiple perspectives. We find that the intrinsic low-dimensionalization performed by DNNs is essential for understanding how OOD samples become increasingly distinct from ID samples as features propagate to deeper layers. Based on these observations, we provide a simple picture that consistently explains various properties of OOD samples: low-dimensional weights eliminate most of the information carried by OOD samples, resulting in misclassifications driven by excessive attention to dataset bias. In addition, we demonstrate the utility of dimensionality by proposing a dimensionality-aware OOD detection method based on the alignment of features and weights, which consistently achieves high performance across various datasets at lower computational cost.
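The abstract's core idea of scoring samples by the alignment between features and classifier weights can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's exact method: it assumes the score is the cosine similarity between a sample's penultimate-layer features and the weight vector of the predicted class, with low alignment suggesting an OOD sample.

```python
import numpy as np

def alignment_ood_score(features, weights):
    """Hypothetical feature-weight alignment score for OOD detection.

    features: (n, d) penultimate-layer activations, one row per sample.
    weights:  (c, d) final linear-layer weight matrix, one row per class.
    Returns an (n,) array; lower values suggest the sample is OOD.
    """
    logits = features @ weights.T            # (n, c) class scores
    pred = logits.argmax(axis=1)             # predicted class per sample
    w = weights[pred]                        # (n, d) matching weight rows
    # Cosine alignment between each feature vector and its
    # predicted-class weight vector (epsilon avoids division by zero).
    num = np.sum(features * w, axis=1)
    denom = (np.linalg.norm(features, axis=1)
             * np.linalg.norm(w, axis=1) + 1e-12)
    return num / denom
```

In practice one would calibrate a threshold on held-out ID data and flag samples whose score falls below it; the paper's actual scoring rule and calibration may differ.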
Comment: This preprint has not undergone peer review (when applicable) or any post-submission improvements or corrections. The Version of Record of this contribution is published in ECML PKDD 2024, and is available online at https://doi.org/10.1007/978-3-031-70341-6_9
Database: arXiv