Showing 1 - 10 of 59 for search: '"Changhan Wang"'
Author:
Yongjun You, Chong Liu, Qi Xu, Xuefei Hu, Simin Zhang, Changhan Wang, Huanliang Guo, Chao Tang
Published in:
Petroleum Science and Technology. 40:2082-2100
This study is the first to use a response surface method to optimize the main parameters of ultrasonic-assisted recovery of oil from oily sludge. The solvent with the highest oil recovery (xylene) and the most suitable process (Ultrasonic Assisted Extraction) …
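The abstract describes response surface methodology (RSM): fit a second-order polynomial to a set of designed experimental runs, then optimize over the fitted surface. A minimal sketch of that idea follows; the two factors (ultrasonic power, extraction time) and all data points are hypothetical illustrations, not values from the paper.

```python
# Minimal RSM sketch: fit a quadratic response surface to experimental
# runs and locate the predicted optimum. All data below is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical design: columns = (ultrasonic power [W], time [min]),
# response = oil recovery (%). Real studies use a designed experiment
# such as a central composite design.
X = np.array([[200, 10], [200, 30], [400, 10], [400, 30],
              [300, 20], [300, 20], [160, 20], [440, 20],
              [300, 6], [300, 34]], dtype=float)
y = np.array([61.2, 68.5, 70.1, 74.8, 78.3, 77.9, 63.0, 72.5, 65.4, 73.1])

# Second-order model: linear, interaction, and quadratic terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Evaluate the fitted surface on a grid and report the best point.
grid = np.array([[p, t] for p in np.linspace(160, 440, 50)
                        for t in np.linspace(6, 34, 50)])
pred = model.predict(poly.transform(grid))
best = grid[pred.argmax()]
print(f"predicted optimum: power={best[0]:.0f} W, time={best[1]:.0f} min")
```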
Pre-trained models for the paper: Pre-training for Speech Translation: CTC Meets Optimal Transport.
- MT models for MuST-C and CoVoST-2
- ASR and ST models for CoVoST (one-to-many and many-to-one)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6f798855bb196f0ebd09d27a0623e44e
Pre-trained models (ASR, MT, and ST) for the paper: Pre-training for Speech Translation: CTC Meets Optimal Transport.
- ASR and ST models for MuST-C (En-De, En-Fr, and one-to-many)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::337a82757b7813aaa25c601b9856756c
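Both entries above release models pre-trained with a CTC objective. As a rough illustration of the CTC ingredient only (not the paper's optimal-transport component), here is a minimal PyTorch sketch with random stand-in tensors in place of real encoder outputs and transcripts:

```python
# Sketch of CTC pre-training for a speech encoder (PyTorch).
import torch
import torch.nn as nn

T, N, C = 50, 4, 32  # frames, batch size, vocabulary size (incl. blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12))          # token ids (0 = blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

# CTC aligns the T encoder frames to the shorter target sequence by
# marginalizing over all valid alignments with blanks and repeats.
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # in pre-training, this gradient updates the encoder
print(loss.item())
```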
Direct speech-to-speech translation (S2ST) is among the most challenging problems in the translation paradigm due to the significant scarcity of S2ST data. While effort has been made to increase the data size from unlabeled speech by cascading pretrained …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::dbab741318c730308c92f92f98b0cce9
http://arxiv.org/abs/2210.14514
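The "cascading pretrained" idea mentioned in the abstract amounts to pseudo-labeling: pretrained models turn unlabeled source speech into target-side supervision for a direct model. A sketch under that assumption; transcribe() and translate() are hypothetical stubs standing in for real pretrained ASR and MT models.

```python
# Sketch of cascaded pseudo-labeling for S2ST/ST training data.
from typing import List, Tuple

def transcribe(audio: str) -> str:
    return f"<asr transcript of {audio}>"   # stub: pretrained ASR model

def translate(text: str) -> str:
    return f"<translation of {text}>"       # stub: pretrained MT model

def make_pseudo_labels(audio_files: List[str]) -> List[Tuple[str, str]]:
    # Each unlabeled utterance becomes a (speech, pseudo target) pair.
    return [(a, translate(transcribe(a))) for a in audio_files]

pairs = make_pseudo_labels(["utt1.wav", "utt2.wav"])
print(pairs)
```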
Published in:
AAAI
Almost all existing machine translation models are built on top of character-based vocabularies: characters, subwords or words. Rare characters from noisy text or character-rich languages such as Japanese and Chinese, however, can unnecessarily take up vocabulary slots …
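One alternative explored in this line of work is to encode text as UTF-8 bytes, which caps the base vocabulary at 256 symbols so rare characters never need their own vocabulary slots. A minimal illustration:

```python
# Byte-level encoding: every symbol is a byte in 0..255, regardless of
# how rare the underlying character is.
text = "翻訳 translation 翻译"
byte_ids = list(text.encode("utf-8"))
print(len(set(byte_ids)), "distinct symbols out of at most 256")

# Round-trip: the original string is recovered exactly.
assert bytes(byte_ids).decode("utf-8") == text
```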
Author:
Yun Tang, Hongyu Gong, Ning Dong, Changhan Wang, Wei-Ning Hsu, Jiatao Gu, Alexei Baevski, Xian Li, Abdelrahman Mohamed, Michael Auli, Juan Pino
We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition. The proposed method incorporates four self-supervised and supervised subtasks for cross modality learning. A self-supervised speech subtask …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::95d825c6c89b5d6fb95c48e9eca1526c
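The abstract describes combining four subtasks in one training objective. A minimal PyTorch sketch of that pattern follows; the subtask names, the placeholder losses, and the uniform weights are all illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of multi-task pre-training: a weighted sum of subtask losses.
import torch

losses = {
    "speech_self_supervised": torch.rand(1, requires_grad=True),
    "text_self_supervised":   torch.rand(1, requires_grad=True),
    "asr":                    torch.rand(1, requires_grad=True),
    "speech_translation":     torch.rand(1, requires_grad=True),
}
weights = {k: 1.0 for k in losses}  # illustrative; tuned in practice

total = sum(weights[k] * losses[k] for k in losses)
total.backward()  # one backward pass updates the shared encoder-decoder
print(float(total))
```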
Author:
Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Yossi Adi, Juan Pino, Jiatao Gu, Wei-Ning Hsu
We present a textless speech-to-speech translation (S2ST) system that can translate speech from one language into another language and can be built without the need of any text data. Different from existing work in the literature, we tackle the challenge …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c3fee0052a03fe6596c1cb2fa7ca2654
http://arxiv.org/abs/2112.08352
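Textless S2ST systems of this kind typically replace text targets with discrete units clustered from self-supervised speech features. A minimal sketch of that ingredient, using random stand-in features in place of real outputs from a model such as HuBERT:

```python
# Sketch: cluster speech features into discrete units ("pseudo-text").
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 768))   # frames x feature dim (stand-in)

km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(features)
units = km.predict(features)              # one discrete unit per frame
print(units[:20])                         # targets a textless model can predict
```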
Author:
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d9d1740f72468c26efa5658360ae3194
http://arxiv.org/abs/2111.09296
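The released XLS-R checkpoints are also usable through the HuggingFace transformers port. A minimal feature-extraction sketch, assuming the 300M-parameter checkpoint facebook/wav2vec2-xls-r-300m and a random waveform standing in for real 16 kHz audio:

```python
# Sketch: extract cross-lingual speech representations with XLS-R.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-xls-r-300m"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)

speech = torch.randn(16000)  # 1 s of 16 kHz audio as a stand-in
inputs = extractor(speech.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, frames, hidden_dim) per-frame features
```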
Published in:
Interspeech 2021.
Published in:
ACL/IJCNLP (1)
Pretraining and multitask learning are widely used to improve speech-to-text translation performance. In this study, we are interested in training a speech-to-text translation model along with an auxiliary text-to-text translation task. We conduct a detailed analysis …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2e36e98898d076880b00ba209a7218ac
http://arxiv.org/abs/2107.05782
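One common way to realize the joint ST + auxiliary MT training the abstract describes is to alternate mini-batches from the two tasks through shared parameters. A toy PyTorch sketch of that training loop; the model, objectives, and batches are placeholder assumptions, not the paper's recipe.

```python
# Sketch: alternate speech-to-text and text-to-text mini-batches
# through a shared module so both tasks update the same parameters.
import torch
import torch.nn as nn

shared = nn.Linear(16, 8)                 # stand-in for a shared decoder
opt = torch.optim.Adam(shared.parameters(), lr=1e-3)

def task_loss(batch: torch.Tensor) -> torch.Tensor:
    return shared(batch).pow(2).mean()    # placeholder objective

st_batches = [torch.randn(4, 16) for _ in range(3)]   # speech features
mt_batches = [torch.randn(4, 16) for _ in range(3)]   # text embeddings

for st, mt in zip(st_batches, mt_batches):
    for batch in (st, mt):                # alternate ST and MT updates
        opt.zero_grad()
        task_loss(batch).backward()
        opt.step()
```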