Multi-Level Sensor Fusion with Deep Learning
Author: Valentin Vielzeuf, Alexis Lechervy, Stéphane Pateux, Frédéric Jurie
Contributors: Orange Labs R&D [Rennes], France Télécom, Equipe Image - Laboratoire GREYC - UMR6072, Groupe de Recherche en Informatique, Image et Instrumentation de Caen (GREYC), Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Ingénieurs de Caen (ENSICAEN), Normandie Université (NU)-Normandie Université (NU)-Université de Caen Normandie (UNICAEN), Normandie Université (NU)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Ingénieurs de Caen (ENSICAEN), Normandie Université (NU)
Source: HAL; IEEE Sensors Letters, IEEE, 2018, 3 (1)
ISSN: 2475-1472
Description: In the context of deep learning, this article presents an original deep network, namely CentralNet, for the fusion of information coming from different sensors. The approach is designed to efficiently and automatically balance the trade-off between early and late fusion (i.e., between fusing low-level and high-level information). More specifically, at each level of abstraction (the successive layers of the deep networks), unimodal representations of the data are fed to a central neural network, which combines them into a common embedding. In addition, a multi-objective regularization is introduced, helping to optimize both the central network and the unimodal networks. Experiments on four multimodal datasets not only show state-of-the-art performance, but also demonstrate that CentralNet can actually choose the best possible fusion strategy for a given problem. Comment: arXiv admin note: text overlap with arXiv:1808.07275
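The fusion scheme described above can be sketched in a few lines: at each level, a weighted sum of the central hidden state and the unimodal hidden states is passed through a central layer, while the unimodal streams continue in parallel. This is a minimal NumPy sketch, not the authors' exact architecture; the layer widths, number of levels, two-modality setup, and fixed uniform fusion weights (which CentralNet actually learns, alongside its multi-objective loss) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def dense(dim_in, dim_out):
    # random weights stand in for trained parameters
    return rng.standard_normal((dim_out, dim_in)) * 0.1

D = 16       # shared hidden width (assumed)
LEVELS = 3   # number of fusion levels (assumed)

# one stack of layers per modality, plus the central stack
W_a = [dense(D, D) for _ in range(LEVELS)]  # modality A stream
W_b = [dense(D, D) for _ in range(LEVELS)]  # modality B stream
W_c = [dense(D, D) for _ in range(LEVELS)]  # central fusion stream
alpha = np.full((LEVELS, 3), 1.0 / 3.0)     # fusion weights (learned in CentralNet, fixed here)

def forward(x_a, x_b):
    h_a, h_b = x_a, x_b
    h_c = np.zeros_like(x_a)  # central stream starts empty
    for i in range(LEVELS):
        # weighted sum of central + unimodal states, then one central layer
        fused = alpha[i, 0] * h_c + alpha[i, 1] * h_a + alpha[i, 2] * h_b
        h_c = relu(W_c[i] @ fused)
        # unimodal streams keep their own forward passes in parallel
        h_a = relu(W_a[i] @ h_a)
        h_b = relu(W_b[i] @ h_b)
    return h_c  # common embedding fed to the final classifier

emb = forward(rng.standard_normal(D), rng.standard_normal(D))
print(emb.shape)  # (16,)
```

During training, the paper's multi-objective regularization would attach an auxiliary loss to each unimodal stream as well as to the central output, so that all three stacks receive gradient signal; the sketch covers only the forward pass.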
Database: OpenAIRE