Showing 1 - 10 of 3,281 results for the search: '"Crosslingual"'
Author:
Liu, Yihong, Wang, Mingyang, Kargaran, Amir Hossein, Imani, Ayyoob, Xhelili, Orgest, Ye, Haotian, Ma, Chunlan, Yvon, François, Schütze, Hinrich
Recent studies have shown that post-aligning multilingual pretrained language models (mPLMs) using alignment objectives on both original and transliterated data can improve crosslingual alignment. This improvement further leads to better crosslingual…
External link:
http://arxiv.org/abs/2409.17326
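The post-alignment idea mentioned in the entry above can be pictured with a minimal sketch: a contrastive objective that pulls a sentence and its transliteration together in embedding space. This is only an illustration, not the authors' method; the model name, mean pooling, and InfoNCE form are assumptions.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Any multilingual encoder could stand in here; xlm-roberta-base is only an example.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
mplm = AutoModel.from_pretrained("xlm-roberta-base")

def mean_pool(last_hidden, mask):
    # Average token embeddings, ignoring padding positions.
    mask = mask.unsqueeze(-1).float()
    return (last_hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

def alignment_loss(originals, transliterations, temperature=0.05):
    # Encode originals and their transliterations with the same mPLM,
    # then pull matching pairs together with an InfoNCE-style loss.
    def encode(texts):
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        out = mplm(**batch).last_hidden_state
        return F.normalize(mean_pool(out, batch["attention_mask"]), dim=-1)

    z_orig = encode(originals)
    z_translit = encode(transliterations)
    logits = z_orig @ z_translit.T / temperature  # batch-wise similarity matrix
    targets = torch.arange(len(originals))        # the i-th pair is the positive
    return F.cross_entropy(logits, targets)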
Author:
Arase, Yuki, Kajiwara, Tomoyuki
In this study, we propose a method that distils representations of word meaning in context from a pre-trained masked language model in both monolingual and crosslingual settings. Word representations are the basis for context-aware lexical semantics…
External link:
http://arxiv.org/abs/2409.08719
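For orientation, extracting a word-in-context representation from a pre-trained masked LM can look like the sketch below; the model choice and subword-averaging strategy are assumptions, not the distillation recipe proposed in the paper.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
mlm = AutoModel.from_pretrained("bert-base-multilingual-cased")

def word_in_context(sentence: str, target: str) -> torch.Tensor:
    # Encode the sentence and average the hidden states of the subword
    # tokens whose character spans overlap the target word.
    enc = tok(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    start = sentence.index(target)
    end = start + len(target)
    with torch.no_grad():
        hidden = mlm(**enc).last_hidden_state[0]
    keep = [i for i, (s, e) in enumerate(offsets) if e > s and s < end and e > start]
    return hidden[keep].mean(dim=0)

vec = word_in_context("She deposited the money at the bank.", "bank")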
Author:
Chua, Lynn, Ghazi, Badih, Huang, Yangsibo, Kamath, Pritish, Kumar, Ravi, Manurangsi, Pasin, Sinha, Amer, Xie, Chulin, Zhang, Chiyuan
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, effectively being crosslingual? This study evaluates six state-of-the-art…
External link:
http://arxiv.org/abs/2406.16135
There exist three approaches for multilingual and crosslingual automatic speech recognition (MCL-ASR): supervised pre-training with phonetic transcription, supervised pre-training with graphemic transcription, and self-supervised pre-training. We find that pre-training with phonetic supervision…
External link:
http://arxiv.org/abs/2406.02166
Large language models (LLMs) are very proficient text generators. We leverage this capability of LLMs to generate task-specific data via zero-shot prompting and promote cross-lingual transfer for low-resource target languages. Given task-specific data…
External link:
http://arxiv.org/abs/2407.10582
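The data-generation step described in the entry above can be sketched roughly as follows; the prompt wording, task, and model are placeholders, and the paper's actual setup may differ.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

PROMPT = (
    "Write one short movie review in Swahili and label it POSITIVE or NEGATIVE.\n"
    "Format: <review> ||| <label>\n"
)

def generate_examples(n=5):
    # Zero-shot prompt the LLM and parse its completions into labelled
    # synthetic examples for the low-resource target language.
    examples = []
    for _ in range(n):
        out = generator(PROMPT, max_new_tokens=60, do_sample=True)[0]["generated_text"]
        completion = out[len(PROMPT):].strip()
        if "|||" in completion:
            text, label = completion.split("|||", 1)
            examples.append({"text": text.strip(), "label": label.strip()})
    return examples  # synthetic data used to train a task model for the target language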
This paper presents CrossVoice, a novel cascade-based Speech-to-Speech Translation (S2ST) system employing advanced ASR, MT, and TTS technologies with cross-lingual prosody preservation through transfer learning. We conducted comprehensive experiments…
External link:
http://arxiv.org/abs/2406.00021
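A cascade S2ST pipeline of the kind described above can be sketched with off-the-shelf components; the model names are illustrative and the prosody-preservation step of CrossVoice is not reproduced here.

from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
mt = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
tts = pipeline("text-to-speech", model="facebook/mms-tts-fra")  # needs a recent transformers release

def speech_to_speech(wav_path):
    text = asr(wav_path)["text"]                  # 1) transcribe source-language speech
    translated = mt(text)[0]["translation_text"]  # 2) translate the transcript
    return tts(translated)                        # 3) synthesize target speech: {"audio", "sampling_rate"}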
Author:
Sanchez-Bayona, Elisa, Agerri, Rodrigo
Metaphors, although occasionally unperceived, are ubiquitous in our everyday language. Thus, it is crucial for Language Models to be able to grasp the underlying meaning of this kind of figurative language. In this work, we present Meta4XNLI, a novel…
External link:
http://arxiv.org/abs/2404.07053
Transformer-based pre-trained language models (PLMs) have achieved remarkable performance in various natural language processing (NLP) tasks. However, pre-training such models can take considerable resources that are almost only available to high-resource…
External link:
http://arxiv.org/abs/2401.04821
The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages. To address this, we introduce a novel approach that leverages…
External link:
http://arxiv.org/abs/2311.06595