Cross-modal Language Generation using Pivot Stabilization for Web-scale Language Coverage
Author: | Radu Soricut, Ashish V. Thapliyal |
---|---|
Year of publication: | 2020 |
Subject: |
Image captioning, Computer Science - Machine Learning (cs.LG), Computer Science - Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computation and Language (cs.CL), Natural language processing, Machine translation, Artificial intelligence |
Source: | ACL |
DOI: | 10.48550/arxiv.2005.00246 |
Description: | Cross-modal language generation tasks such as image captioning are directly hurt in their ability to support non-English languages by the trend of data-hungry models combined with the lack of non-English annotations. We investigate potential solutions for combining existing language-generation annotations in English with translation capabilities in order to create solutions at web-scale in both domain and language coverage. We describe an approach called Pivot-Language Generation Stabilization (PLuGS), which leverages directly at training time both existing English annotations (gold data) and their machine-translated versions (silver data); at run-time, it first generates an English caption and then a corresponding target-language caption. We show that PLuGS models outperform other candidate solutions in evaluations performed over 5 different target languages, on a large-domain test set using images from the Open Images dataset. Furthermore, we find an interesting effect where the English captions generated by the PLuGS models are better than the captions generated by the original, monolingual English model. Comment: ACL 2020 |
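The description above outlines the PLuGS training recipe: gold English captions are paired with machine-translated silver captions so the model learns to emit English first, then the target language. A minimal sketch of how such training targets might be assembled is shown below; `machine_translate` and the `<2lang>` separator token are hypothetical stand-ins, not the authors' actual implementation.

```python
# Hedged sketch of PLuGS-style target construction (assumptions: the
# separator token name and the MT interface are illustrative only).
SEP = "<2lang>"  # hypothetical token marking the English-to-target boundary

def machine_translate(text: str, target_lang: str) -> str:
    """Placeholder MT system producing 'silver' captions.
    A real pipeline would call an actual NMT model here."""
    fake_lexicon = {("a dog on the beach", "fr"): "un chien sur la plage"}
    return fake_lexicon.get((text, target_lang), text)

def plugs_target(english_caption: str, target_lang: str) -> str:
    """Concatenate the gold English caption with its silver translation,
    so that at run-time the model generates the English caption first
    and the target-language caption second."""
    silver = machine_translate(english_caption, target_lang)
    return f"{english_caption} {SEP} {silver}"

print(plugs_target("a dog on the beach", "fr"))
```

At inference, the generated sequence would be split on the separator to recover the target-language caption, with the English prefix acting as the stabilizing pivot.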
Database: | OpenAIRE |
External link: |