Are open set classification methods effective on large-scale datasets?
Author: | Ronald Kemker, Tyler L. Hayes, Ayesha Gonzales, Christopher Kanan, Ryne Roady |
---|---|
Year: | 2020 |
Subject: | Computer science, Computer and Information Sciences, Artificial Intelligence, Machine learning, Artificial neural network, Neural Networks (Computer), Convolutional neural network, Open set, Inference, Classification, Regularization (mathematics), Feature vector, Robustness (computer science), Support Vector Machines, Training set, Test set, Covariance, Random Variables, Probability Theory, Probability Distribution, Pattern Recognition, Automated Machine Learning, Image Processing (Computer-Assisted), Imaging Techniques, Data Management, Data Visualization, Datasets as Topic, Applied Mathematics, Simulation and Modeling, Algorithms, Mathematics, Physical Sciences, Medicine, Neuroscience, Biology and Life Sciences, Research and Analysis Methods, Humans, Multidisciplinary, Science, Research Article |
Source: | PLoS ONE, Vol 15, Iss 9, p e0238302 (2020) |
ISSN: | 1932-6203 |
Abstract: | Supervised classification methods often assume the train and test data distributions are the same and that all classes in the test set are present in the training set. However, deployed classifiers often require the ability to recognize inputs from outside the training set as unknowns. This problem has been studied under multiple paradigms including out-of-distribution detection and open set recognition. For convolutional neural networks, there have been two major approaches: 1) inference methods to separate knowns from unknowns and 2) feature space regularization strategies to improve model robustness to novel inputs. Up to this point, there has been little attention to exploring the relationship between the two approaches and directly comparing performance on large-scale datasets that have more than a few dozen categories. Using the ImageNet ILSVRC-2012 large-scale classification dataset, we identify novel combinations of regularization and specialized inference methods that perform best across multiple open set classification problems of increasing difficulty level. We find that input perturbation and temperature scaling yield significantly better performance on large-scale datasets than other inference methods tested, regardless of the feature space regularization strategy. Conversely, we find that improving performance with advanced regularization schemes during training yields better performance when baseline inference techniques are used; however, when advanced inference methods are used to detect open set classes, the utility of these cumbersome training paradigms is less evident. |
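The temperature-scaling inference strategy the abstract refers to can be sketched as follows. This is a minimal illustration of the idea only (a temperature-scaled softmax confidence used as an unknown-detection score), not the authors' implementation; the function names and the temperature value are assumptions, and the gradient-based input perturbation step is omitted since it requires backpropagation through a trained network:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def openset_score(logits, T=1000.0):
    """Max softmax probability under a high temperature.

    Low scores suggest the input may belong to an unknown
    (open set) class; a threshold on this score separates
    knowns from unknowns at inference time.
    """
    return softmax(logits, T).max()

# A peaked logit vector (confident known class) scores higher
# than a flat one (a plausible unknown), even after scaling.
known_logits = [10.0, 1.0, 0.5]
unknown_logits = [2.0, 1.9, 2.1]
assert openset_score(known_logits) > openset_score(unknown_logits)
```

In practice the score would be thresholded (with the threshold chosen on held-out data) to decide whether to reject an input as an unknown rather than assign it a known-class label.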
Database: | OpenAIRE |
External link: | |