Author:
Vong WK (Center for Data Science, New York University, New York, NY, USA); Wang W (Center for Data Science, New York University, New York, NY, USA); Orhan AE (Center for Data Science, New York University, New York, NY, USA); Lake BM (Center for Data Science, New York University, New York, NY, USA; Department of Psychology, New York University, New York, NY, USA)
Abstract:
Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child's everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child's input.
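The "cross-modal associations" described in the abstract can be illustrated with a contrastive objective over co-occurring image frames and transcribed utterances. The PyTorch sketch below is illustrative only: the encoder architectures, embedding dimension, vocabulary size, and temperature handling are assumptions made for the example and do not reproduce the authors' exact model or training setup.

```python
# Minimal sketch of contrastive cross-modal associative learning on paired
# frame/utterance data. All architectural choices here are illustrative
# assumptions, not the published model configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128):
        super().__init__()
        # Vision encoder: a small CNN standing in for an image backbone.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Language encoder: mean-pooled word embeddings for an utterance.
        self.word_embed = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        # Learnable temperature for scaling the similarity logits.
        self.logit_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, images, utterances):
        # Embed both modalities into a shared space and L2-normalize.
        v = F.normalize(self.vision(images), dim=-1)
        t = F.normalize(self.word_embed(utterances), dim=-1)
        return v, t

def contrastive_loss(v, t, scale):
    # Co-occurring frame/utterance pairs (the diagonal) are pulled together;
    # mismatched pairs within the batch are pushed apart.
    logits = scale * v @ t.t()
    targets = torch.arange(v.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: 8 frames paired with 8 tokenized utterances of length 6.
model = CrossModalModel()
images = torch.randn(8, 3, 64, 64)
utterances = torch.randint(0, 1000, (8, 6))
v, t = model(images, utterances)
loss = contrastive_loss(v, t, model.logit_scale.exp())
loss.backward()
print(loss.item())
```

After training on such paired streams, zero-shot evaluation would compare a held-out frame's embedding against the embeddings of candidate word labels and pick the nearest one; this usage pattern is implied by the abstract, while the specifics above remain an assumption.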