Showing 1 - 10 of 19 for search: '"Alexis Conneau"'
End-to-end speech-to-speech translation (S2ST) without relying on intermediate text representations is a rapidly emerging frontier of research. Recent works have demonstrated that the performance of such direct S2ST systems is approaching that of conventional cascade S2ST …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8256e1b689fe46f847abaa17c546580f
http://arxiv.org/abs/2203.13339
Author:
Alexis Conneau, Ankur Bapna, Yu Zhang, Min Ma, Patrick von Platen, Anton Lozhkov, Colin Cherry, Ye Jia, Clara Rivera, Mihir Kale, Daan van Esch, Vera Axelrod, Simran Khanuja, Jonathan Clark, Orhan Firat, Michael Auli, Sebastian Ruder, Jason Riesa, Melvin Johnson
We introduce XTREME-S, a new benchmark to evaluate universal cross-lingual speech representations in many languages. XTREME-S covers four task families: speech recognition, classification, speech-to-text translation and retrieval. Covering 102 languages, XTREME-S aims to simplify multilingual speech representation evaluation … (a short loading sketch follows this entry)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7d5112276248b7b6fa2427d9ed857587
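As a hedged illustration of how such a benchmark is typically consumed, the sketch below loads one XTREME-S task through the Hugging Face `datasets` library; the repository id `google/xtreme_s`, the configuration string `minds14.en-US`, and the column names are assumptions to be checked against the dataset card.

```python
# Hedged sketch: loading one XTREME-S task family (MInDS-14 intent classification).
# The dataset id "google/xtreme_s", the config "minds14.en-US", and the column names
# are assumptions; recent `datasets` releases may also require trust_remote_code=True.
from datasets import load_dataset

minds14 = load_dataset("google/xtreme_s", "minds14.en-US", split="train")
example = minds14[0]
print(example["audio"]["sampling_rate"], example["intent_class"])
```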
Author:
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work … (a feature-extraction sketch follows this entry)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d9d1740f72468c26efa5658360ae3194
http://arxiv.org/abs/2111.09296
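As a small illustration of how the released XLS-R checkpoints are commonly used, the sketch below extracts frame-level speech representations with the `transformers` library; the checkpoint name `facebook/wav2vec2-xls-r-300m` is the smallest public variant and stands in for whichever release is actually needed.

```python
# Sketch: extracting speech representations from a public XLS-R checkpoint.
# The 1B and 2B variants follow the same interface; only the checkpoint name changes.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

checkpoint = "facebook/wav2vec2-xls-r-300m"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

waveform = torch.zeros(16000)  # one second of silence at 16 kHz, a stand-in for real audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```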
Author:
Michael Auli, Alexis Conneau, Tatiana Likhomanenko, Qiantong Xu, Paden Tomasello, Gabriel Synnaeve, Alexei Baevski, Ronan Collobert
Published in:
ICASSP
Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary …
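The combination described above is, at its core, a pseudo-labeling loop on top of a pre-trained acoustic model. The sketch below is a generic version of that loop, not the paper's exact recipe; `teacher`, `student`, `transcribe`, and `train_fn` are hypothetical placeholders supplied by the caller.

```python
# Generic pseudo-labeling sketch (not the paper's exact recipe): a pre-trained teacher
# transcribes unlabeled audio, and a student is trained on real plus pseudo-labeled data.
def self_training_round(teacher, student, labeled_data, unlabeled_audio, transcribe, train_fn):
    pseudo_labeled = []
    for audio in unlabeled_audio:
        # The teacher's transcription serves as a pseudo-label (optionally filtered by confidence).
        hypothesis = transcribe(teacher, audio)
        pseudo_labeled.append((audio, hypothesis))
    # Train the student on the union of human-labeled and pseudo-labeled examples.
    train_fn(student, labeled_data + pseudo_labeled)
    return student
```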
In this paper, we improve speech translation (ST) through effectively leveraging large quantities of unlabeled speech and text data in different and complementary ways. We explore both pretraining and self-training by using the large Libri-Light speech audio corpus and language modeling with CommonCrawl …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::07c7d6ac118095cd41352bc1dc5f878c
Author:
Juan Pino, Alexei Baevski, Chau Tran, Yuqing Tang, Michael Auli, Xian Li, Yun Tang, Changhan Wang, Alexis Conneau
Published in:
ACL/IJCNLP (1)
We present a simple yet effective approach to build multilingual speech-to-text (ST) translation through efficient transfer learning from a pretrained speech encoder and text decoder. Our key finding is that a minimalistic LNA (LayerNorm and Attention) finetuning can achieve zero-shot cross-lingual and cross-modality transfer ability …
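The "LNA (LayerNorm and Attention)" finetuning mentioned above amounts to updating only a small subset of a pretrained model's parameters. The sketch below shows one way to select such a subset in PyTorch; the substring patterns are assumptions and must be matched against the actual module names of the model being finetuned.

```python
# Sketch of LNA-style finetuning: unfreeze only LayerNorm and attention parameters
# and keep everything else frozen. The substring patterns below are assumptions.
import torch.nn as nn

def apply_lna_finetuning(model: nn.Module,
                         patterns=("layer_norm", "layernorm", "self_attn", "attention")):
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(p in name.lower() for p in patterns)
        if param.requires_grad:
            trainable += param.numel()
    return trainable  # number of parameters that will actually be updated
```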
Author:
Andros Tjandra, Diptanu Gon Choudhury, Frank Zhang, Kritika Singh, Alexis Conneau, Alexei Baevski, Assaf Sela, Yatharth Saraf, Michael Auli
Language identification greatly impacts the success of downstream tasks such as automatic speech recognition. Recently, self-supervised speech representations learned by wav2vec 2.0 have been shown to be very effective for a range of speech tasks. We extend previous self-supervised work on language identification … (a classification sketch follows this entry)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::aede8a69be1d1d10628f5d34d47a2564
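One simple way to use such self-supervised representations for language identification is to mean-pool the encoder output and classify the pooled vector with a linear layer; the sketch below follows that assumption (it is not necessarily the paper's exact setup), and the encoder checkpoint name is an assumption as well.

```python
# Hedged sketch: wav2vec 2.0 features + linear classifier for language identification.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class LangIdClassifier(nn.Module):
    def __init__(self, encoder_name="facebook/wav2vec2-xls-r-300m", num_languages=128):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_languages)

    def forward(self, input_values):
        hidden = self.encoder(input_values).last_hidden_state  # (batch, frames, dim)
        pooled = hidden.mean(dim=1)                            # average over time
        return self.classifier(pooled)                         # language logits
```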
Published in:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021).
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R …
Author:
Edouard Grave, Veselin Stoyanov, Vishrav Chaudhary, Beliz Gunel, Jingfei Du, Onur Celebi, Michael Auli, Alexis Conneau
Published in:
NAACL-HLT
Unsupervised pre-training has led to much recent progress in natural language understanding. In this paper, we study self-training as another way to leverage unlabeled data through semi-supervised learning. To obtain additional data for a specific task, we introduce SentAugment, a data augmentation method that uses task-specific query embeddings to retrieve relevant sentences from a large bank of unlabeled web sentences … (a retrieval sketch follows this entry)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::35d9966a169360a0ecb857f71449e7a5
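The retrieval step described above can be approximated with plain embedding similarity. The sketch below is a hedged approximation of that idea, not the paper's SentAugment implementation; how the sentence embeddings are produced is left open.

```python
# Hedged sketch: retrieve unlabeled sentences whose embeddings are closest to a
# task-level query embedding (cosine similarity), before pseudo-labeling them.
import torch
import torch.nn.functional as F

def retrieve_candidates(task_embeddings, bank_embeddings, bank_sentences, k=1000):
    """task_embeddings: (n, d) labeled-task vectors; bank_embeddings: (m, d) unlabeled bank."""
    query = F.normalize(task_embeddings.mean(dim=0, keepdim=True), dim=-1)  # task-level query
    bank = F.normalize(bank_embeddings, dim=-1)
    scores = (bank @ query.T).squeeze(-1)                                   # cosine similarities
    top = torch.topk(scores, k=min(k, len(bank_sentences))).indices
    return [bank_sentences[i] for i in top.tolist()]
```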
Author:
Edouard Grave, Vishrav Chaudhary, Guillaume Wenzek, Luke Zettlemoyer, Kartikay Khandelwal, Veselin Stoyanov, Naman Goyal, Myle Ott, Alexis Conneau, Francisco Guzmán
Published in:
ACL
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data …
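The released checkpoints of this model family can be queried directly as masked language models; the sketch below uses the public `xlm-roberta-base` checkpoint via the `transformers` fill-mask pipeline, with the larger variants as drop-in replacements.

```python
# Sketch: querying a public XLM-R checkpoint as a masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")
print(fill_mask("Paris is the <mask> of France."))    # English query
print(fill_mask("Praha je <mask> České republiky."))  # the same model handles Czech
```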