Grounded language acquisition through the eyes and ears of a single child.

Authors: Vong WK (Center for Data Science, New York University, New York, NY, USA); Wang W (Center for Data Science, New York University, New York, NY, USA); Orhan AE (Center for Data Science, New York University, New York, NY, USA); Lake BM (Center for Data Science, New York University, New York, NY, USA; Department of Psychology, New York University, New York, NY, USA)
Language: English
Source: Science (New York, N.Y.) [Science] 2024 Feb 02; Vol. 383 (6682), pp. 504-511. Date of Electronic Publication: 2024 Feb 01.
DOI: 10.1126/science.adi1374
Abstract: Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child's everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child's input.
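Note: The cross-modal associative learning the abstract describes can be illustrated with a CLIP-style contrastive setup, in which co-occurring video frames and transcribed utterances are embedded into a shared space and pulled together while mismatched pairs are pushed apart. The sketch below is a minimal illustration of that kind of objective, not the authors' code; the encoder architectures, dimensions, and names (ImageEncoder, TextEncoder, contrastive_loss) are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImageEncoder(nn.Module):
        """Maps video frames to a shared embedding space (stand-in for a
        larger vision backbone)."""
        def __init__(self, embed_dim=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )
        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)

    class TextEncoder(nn.Module):
        """Averages word embeddings of a transcribed utterance."""
        def __init__(self, vocab_size=10000, embed_dim=512):
            super().__init__()
            self.emb = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        def forward(self, token_ids):
            return F.normalize(self.emb(token_ids), dim=-1)

    def contrastive_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric InfoNCE: co-occurring frame/utterance pairs are
        positives; all other pairings in the batch act as negatives."""
        logits = img_emb @ txt_emb.t() / temperature
        targets = torch.arange(len(logits), device=logits.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    # One illustrative training step on a batch of co-occurring
    # (frame, utterance) pairs; inputs here are random placeholders.
    img_enc, txt_enc = ImageEncoder(), TextEncoder()
    frames = torch.randn(8, 3, 64, 64)
    utterances = torch.randint(0, 10000, (8, 6))
    loss = contrastive_loss(img_enc(frames), txt_enc(utterances))
    loss.backward()

Under this kind of objective, word-referent mappings emerge purely from co-occurrence statistics in the paired data streams, which is what lets the abstract speak of relatively generic learning mechanisms rather than built-in lexical biases.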
Database: MEDLINE