Language learning using Speech to Image retrieval
Author: | Danny Merkx, Stefan L. Frank, Mirjam Ernestus |
---|---|
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; Computer Science - Computation and Language; Artificial neural network; Computer science; Speech recognition; 02 engineering and technology; Language acquisition; Language & Communication; Language in Interaction; 030507 speech-language pathology & audiology; 03 medical and health sciences; Word recognition; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Active listening; Speech Production and Comprehension; Language & Speech Technology; 0305 other medical science; Image retrieval; Computation and Language (cs.CL); GeneralLiterature_REFERENCE (e.g. dictionaries, encyclopedias, glossaries); Sentence; Word (computer architecture) |
Source: | Proceedings of Interspeech 2019. Crossroads of Speech and Language, pp. 1841-1845. [S.l.]: ISCA |
DOI: | 10.21437/interspeech.2019-3067 |
Description: | Humans learn language by interacting with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech, but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention, our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition. A minimal illustrative sketch of such an encoder follows this record. Comment: Submitted to InterSpeech 2019 |
Database: | OpenAIRE |
External link: |
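
The description above mentions a multi-layer GRU with vectorial self-attention, trained to embed spoken captions and images in a shared space for image-caption retrieval. The sketch below is a minimal illustration of that general setup, not the authors' released code: module names, layer sizes, the 40-dimensional acoustic input and the 0.2 margin are assumptions, and the importance sampling, cyclic learning rates and ensembling mentioned in the description are omitted.

```python
# Minimal PyTorch sketch (illustration only, not the authors' implementation) of a
# visually grounded speech encoder: a multi-layer bidirectional GRU over acoustic
# features, vectorial self-attention pooling, and a margin-based contrastive loss
# that pushes matching caption-image pairs above mismatched pairs in the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorialSelfAttention(nn.Module):
    """Attention with a separate weight per feature dimension ("vectorial"),
    used to pool GRU outputs over time into one sentence embedding."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, dim)  # one attention score per hidden dimension

    def forward(self, hidden):                            # hidden: (batch, time, dim)
        alpha = torch.softmax(self.score(hidden), dim=1)  # weights over time, per dim
        return (alpha * hidden).sum(dim=1)                # pooled: (batch, dim)


class SpeechEncoder(nn.Module):
    def __init__(self, n_features=40, hidden=512, layers=4):  # sizes are assumptions
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=layers,
                          batch_first=True, bidirectional=True)
        self.att = VectorialSelfAttention(2 * hidden)

    def forward(self, x):                  # x: (batch, time, n_features), e.g. MFCCs
        out, _ = self.gru(x)
        return F.normalize(self.att(out), dim=-1)   # unit-length caption embedding


def contrastive_loss(caption_emb, image_emb, margin=0.2):
    """Hinge loss over cosine similarities: true caption-image pairs should
    outscore all mismatched pairs in the batch by at least `margin`."""
    sims = caption_emb @ image_emb.t()               # (batch, batch) similarity matrix
    pos = sims.diag().unsqueeze(1)                   # similarities of the true pairs
    cost_c = (margin + sims - pos).clamp(min=0)      # caption vs. wrong images
    cost_i = (margin + sims - pos.t()).clamp(min=0)  # image vs. wrong captions
    mask = 1 - torch.eye(sims.size(0), device=sims.device)  # drop the diagonal
    return ((cost_c + cost_i) * mask).mean()
```

In this sketch the image embeddings would come from a separate (typically pretrained) image network projected to the same dimensionality, and the probing analysis described above would then be run on the hidden states of the individual GRU layers.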