Visually Grounded Speech Models have a Mutual Exclusivity Bias

Authors: Nortje, Leanne; Oneaţă, Dan; Matusevych, Yevgen; Kamper, Herman
Publication year: 2024
Subject:
Document type: Working Paper
Description: When children learn new words, they employ constraints such as the mutual exclusivity (ME) bias: a novel word is mapped to a novel object rather than a familiar one. This bias has been studied computationally, but only in models that use discrete word representations as input, ignoring the high variability of spoken words. We investigate the ME bias in the context of visually grounded speech models that learn from natural images and continuous speech audio. Concretely, we train a model on familiar words and test its ME bias by asking it to select between a novel and a familiar object when queried with a novel word. To simulate prior acoustic and visual knowledge, we experiment with several initialisation strategies using pretrained speech and vision networks. Our findings reveal the ME bias across the different initialisation approaches, with a stronger bias in models with more prior (in particular, visual) knowledge. Additional tests confirm the robustness of our results, even when different loss functions are considered.
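The ME test described above reduces to a two-alternative forced choice: given the embedding of a novel spoken-word query, the model picks whichever of two object embeddings is more similar. The sketch below is a minimal, hypothetical illustration of that decision rule using cosine similarity; the function names and toy 3-d embeddings are assumptions for illustration, not the paper's actual model or representations.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def me_choice(query_emb, novel_obj_emb, familiar_obj_emb):
    """Two-alternative forced choice: return 'novel' if the spoken-word
    query embedding is closer to the novel object's embedding than to
    the familiar object's, else 'familiar'."""
    s_novel = cosine(query_emb, novel_obj_emb)
    s_familiar = cosine(query_emb, familiar_obj_emb)
    return "novel" if s_novel > s_familiar else "familiar"

# Toy, made-up 3-d embeddings: the novel-word query happens to lie
# closer to the novel object, so an ME-biased model selects it.
query = np.array([1.0, 0.1, 0.0])
novel_obj = np.array([0.9, 0.2, 0.1])
familiar_obj = np.array([0.0, 1.0, 0.5])
print(me_choice(query, novel_obj, familiar_obj))  # → novel
```

Averaging this choice over many novel-word trials gives the fraction of novel-object selections, which is how strongly an ME bias manifests: a value above chance (0.5) indicates the bias is present.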
Comment: Accepted to TACL, pre-MIT Press publication version
Database: arXiv