Embodied Language Grounding With 3D Visual Feature Representations
Authors: | Maximilian Sieb, Hsiao-Yu Fish Tung, Mihir Prabhudesai, Syed Ashar Javed, Adam W. Harley, Katerina Fragkiadaki |
---|---|
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Machine Learning (cs.LG); Computer Science - Robotics (cs.RO); computer vision; artificial intelligence; feature extraction; feature vector; feature (computer vision); margin (machine learning); object (computer science); visualization; RGB color model; utterance |
Source: | CVPR |
DOI: | 10.1109/cvpr42600.2020.00229 |
Description: | We propose associating language utterances with 3D visual abstractions of the scenes they describe. These abstractions are encoded as 3D visual feature maps, which we infer from RGB images of the scene via view prediction: when the generated 3D scene feature map is neurally projected from a camera viewpoint, it should match the corresponding RGB image. We present generative models that condition on the dependency tree of an utterance to generate a corresponding 3D visual feature map and reason about its plausibility, and detector models that condition on both the dependency tree of an utterance and a related image to localize the object referents in the 3D feature map inferred from the image. Our model outperforms language-and-vision models that associate language with 2D CNN activations or 2D images by a large margin on a variety of tasks, such as classifying the plausibility of utterances, detecting referential expressions, and supplying rewards for trajectory optimization of object-placement policies from language instructions. Through numerous ablations we show that the improved performance of our detectors stems from their better generalization across camera viewpoints and the absence of object interference in the inferred 3D feature space, and that the improved performance of our generators stems from their ability to spatially reason about objects and their configurations in 3D when mapping from language to scenes. |
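The view-prediction objective mentioned in the description can be sketched in miniature as follows. This is an illustrative toy only, not the authors' implementation: the array shapes, the function names, and the mean-over-depth projection (standing in for the paper's learned differentiable neural projector) are all assumptions made here for clarity.

```python
import numpy as np

def project_to_view(feat3d, depth_axis=0):
    # Toy stand-in for a differentiable neural projection:
    # collapse the depth axis of a 3D feature grid into a 2D view.
    # (The paper uses a learned projector; this is illustrative only.)
    return feat3d.mean(axis=depth_axis)

def view_prediction_loss(feat3d, target_image):
    # L2 mismatch between the projected 3D feature map and the
    # RGB view observed from that camera viewpoint.
    pred = project_to_view(feat3d)
    return float(np.mean((pred - target_image) ** 2))

# Hypothetical shapes: a depth x height x width feature grid and
# a height x width target "image".
rng = np.random.default_rng(0)
grid = rng.normal(size=(8, 16, 16))
target = grid.mean(axis=0)  # a view the grid reproduces exactly

print(view_prediction_loss(grid, target))  # → 0.0 for a perfect match
```

Minimizing such a loss over many viewpoints is what forces the 3D feature map to stay consistent with the observed images; language is then grounded in that 3D space rather than in 2D activations.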
Database: | OpenAIRE |
External link: |