Undoing the Damage of Dataset Bias
Author: | Tomasz Malisiewicz, Tinghui Zhou, Aditya Khosla, Alexei A. Efros, Antonio Torralba |
---|---|
Year: | 2012 |
Subject: |
FOS: Computer and information sciences; Generalization; Computer science; Cognitive neuroscience of visual object recognition; Adaptive Agents and Intelligent Robotics; Machine learning; Support vector machine; Discriminative model; Object model; Artificial intelligence; Transfer of learning |
Source: | Computer Vision – ECCV 2012, ISBN 9783642337178, ECCV (1) |
DOI: | 10.1184/r1/6561461 |
Description: | The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets. |
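The decomposition described in the abstract — a shared set of "visual world" weights plus a learned bias vector per dataset — can be sketched in a few lines. The sketch below is illustrative only: it uses plain subgradient descent on a hinge loss with L2 penalties, whereas the paper formulates the problem as a max-margin (SVM-style) objective; the hyperparameter names (`lam_vw`, `lam_bias`, `C`) and the synthetic setup are assumptions, not the authors' implementation.

```python
import numpy as np

def undo_bias_train(datasets, lam_vw=0.1, lam_bias=0.01, C=1.0,
                    lr=0.05, epochs=300):
    """Learn shared 'visual world' weights w_vw plus a per-dataset bias
    vector delta_i; dataset i is scored with (w_vw + delta_i).
    A smaller penalty on the deltas (lam_bias < lam_vw) makes it cheaper
    to park dataset-specific signal in the bias vectors, so w_vw is
    pushed toward cues common to all datasets."""
    d = datasets[0][0].shape[1]
    w_vw = np.zeros(d)
    deltas = [np.zeros(d) for _ in datasets]
    for _ in range(epochs):
        g_vw = lam_vw * w_vw                      # L2 penalty on shared weights
        g_del = [lam_bias * dl for dl in deltas]  # L2 penalty on each bias vector
        for i, (X, y) in enumerate(datasets):
            margins = y * (X @ (w_vw + deltas[i]))
            viol = margins < 1                    # hinge-loss violators
            g = -C * (y[viol, None] * X[viol]).sum(axis=0) / len(y)
            g_vw += g                             # shared and per-dataset weights
            g_del[i] += g                         # receive the same data gradient
        w_vw -= lr * g_vw
        for i in range(len(deltas)):
            deltas[i] -= lr * g_del[i]
    return w_vw, deltas
```

At test time on a novel, unseen dataset the model drops the bias vectors and scores with `w_vw` alone, which is the sense in which the bias is "undone".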
Database: | OpenAIRE |
External link: |