Improved deep depth estimation for environments with sparse visual cues
Author: Niclas Joswig, Juuso Autiosalo, Laura Ruotsalainen
Contributors: University of Helsinki, Department of Electronics and Nanoengineering, Department of Mechanical Engineering, Aalto University, Department of Computer Science, Spatiotemporal Data Analysis, Doctoral Programme in Computer Science, SUSTAINABLE URBAN DEVELOPMENT EMERGING FROM THE MERGER OF CUTTING-EDGE CLIMATE, SOCIAL AND COMPUTER SCIENCES, Helsinki Institute of Sustainability Science (HELSUS)
Year of publication: 2023
Subject:
Source: Machine Vision and Applications, vol. 34
ISSN: 0932-8092 (print), 1432-1769 (electronic)
DOI: 10.1007/s00138-022-01364-0
Description: Funding Information: This work has been supported by a donation from Konecranes, the Finnish Center for Artificial Intelligence (FCAI), the University of Helsinki, and Aalto University. Publisher Copyright: © 2022, The Author(s).

Abstract: Most deep learning-based depth estimation models that learn scene structure self-supervised from monocular video base their estimates on visual cues such as vanishing points. In the established depth estimation benchmarks, depicting, for example, street navigation or indoor offices, these cues appear consistently, which enables neural networks to predict depth maps from single images. In this work, we address the challenge of depth estimation from a real-world bird's-eye perspective in an industrial environment that, owing to its special geometry, contains minimal visual cues and hence requires incorporating the temporal domain for structure-from-motion estimation. To enable the system to infer structure from motion from pixel translation in context-sparse (i.e., visual-cue-sparse) scenery, we propose a novel architecture built upon the structure-from-motion learner that uses temporal pairs of jointly unrotated and stacked images for depth prediction. To increase overall performance and to avoid blurred depth edges lying between the edges of the two input images, we integrate a geometric consistency loss into our pipeline. We assess the model's ability to learn structure from motion by introducing a novel industry dataset whose perspective, orthogonal to the floor, contains only minimal visual cues. Through evaluation against ground-truth depth, we show that our proposed method outperforms the state of the art in difficult context-sparse environments.
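The central architectural idea in the abstract, feeding the depth network a channel-stacked pair of jointly unrotated frames so that only depth-induced pixel translation remains, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the `unrotate` helper, the choice to warp frame t+1 into frame t's orientation (rather than splitting the rotation between both frames), and all tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def unrotate(img, R, K):
    """Warp `img` by the pure-rotation homography K @ R @ K^-1, removing
    rotational camera motion so only depth-induced translation remains.
    Hypothetical helper, not from the paper.

    img: (B, 3, H, W) source frame
    R:   (B, 3, 3) rotation taking target-frame coordinates to `img`'s frame
    K:   (B, 3, 3) camera intrinsics
    """
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=img.dtype, device=img.device),
        torch.arange(W, dtype=img.dtype, device=img.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    src = K @ R @ torch.inverse(K) @ pix          # target pixel -> source pixel
    src = src[:, :2] / src[:, 2:3].clamp(min=1e-6)
    # grid_sample expects sampling coordinates normalized to [-1, 1]
    gx = 2.0 * src[:, 0] / (W - 1) - 1.0
    gy = 2.0 * src[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def stacked_input(img_t, img_t1, R_rel, K):
    """Channel-stack frame t with frame t+1 unrotated into t's orientation;
    any depth network whose first conv accepts 6 channels can consume it."""
    return torch.cat([img_t, unrotate(img_t1, R_rel, K)], dim=1)  # (B,6,H,W)

# Smoke test with an identity rotation and made-up intrinsics.
img_t, img_t1 = torch.rand(1, 3, 128, 160), torch.rand(1, 3, 128, 160)
K = torch.tensor([[[120.0, 0.0, 80.0], [0.0, 120.0, 64.0], [0.0, 0.0, 1.0]]])
R = torch.eye(3).unsqueeze(0)
x = stacked_input(img_t, img_t1, R, K)   # -> torch.Size([1, 6, 128, 160])
```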
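The geometric consistency loss mentioned in the abstract can likewise be sketched. The form below follows the widely used normalized depth-difference term of Bian et al.'s SC-SfMLearner; the record does not state whether the paper uses exactly this formulation, so treat it as an assumption. `depth_b_warped` is assumed to be frame t+1's predicted depth already projected into frame t's view via the predicted pose.

```python
import torch

def geometric_consistency_loss(depth_a, depth_b_warped, eps=1e-7):
    """Penalize disagreement between the depth predicted for frame t and
    the depth of frame t+1 warped into frame t's view, both (B, 1, H, W).
    The normalized ratio stays in [0, 1) and is scale-invariant, which
    discourages blurred depth edges between the two input frames."""
    diff = (depth_a - depth_b_warped).abs()
    return (diff / (depth_a + depth_b_warped).clamp(min=eps)).mean()

# Example: the loss is near zero when the two depth maps agree.
d_a = torch.rand(1, 1, 64, 80) + 0.5
loss = geometric_consistency_loss(d_a, d_a * 1.05)
```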
Database: OpenAIRE
External link: