Showing 1 - 10 of 22 for search: '"Yarmohammadi, Mahsa"'
Existing research suggests that automatic speech recognition (ASR) models can benefit from additional contexts (e.g., contact lists, user-specified vocabulary). Rare words and named entities can be better recognized with contexts. In this work, we pr…
External link:
http://arxiv.org/abs/2407.10303
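The snippet above describes biasing ASR toward user-supplied context words. As a minimal, hypothetical sketch (not the paper's actual method), one common realization is rescoring an n-best list so hypotheses containing context words get a score boost; all names and scores below are illustrative:

```python
def rescore(hypotheses, context_words, boost=2.0):
    """Rescore ASR n-best hypotheses by boosting those containing
    words from a user-supplied context list (e.g., a contact list).
    Scores are log-probabilities; the boost is added once per match.
    A toy illustration of contextual biasing, not the paper's method.
    """
    rescored = []
    for text, logprob in hypotheses:
        matches = sum(1 for w in text.lower().split() if w in context_words)
        rescored.append((text, logprob + boost * matches))
    return sorted(rescored, key=lambda p: p[1], reverse=True)

# The baseline model slightly prefers "call john lemon", but the
# contact-list entry "lennon" pushes the correct hypothesis to the top.
context = {"lennon", "mahsa"}
nbest = [("call john lemon", -3.1), ("call john lennon", -3.4)]
best = rescore(nbest, context)[0][0]
```

The boost acts like shallow fusion with an external context model: it only reranks existing hypotheses, so it is cheap but cannot recover words absent from the n-best list.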
Author:
Gantt, William, Behzad, Shabnam, An, Hannah YoungEun, Chen, Yunmo, White, Aaron Steven, Van Durme, Benjamin, Yarmohammadi, Mahsa
We introduce MultiMUC, the first multilingual parallel corpus for template filling, comprising translations of the classic MUC-4 template filling benchmark into five languages: Arabic, Chinese, Farsi, Korean, and Russian. We obtain automatic translat…
External link:
http://arxiv.org/abs/2401.16209
Author:
Barham, Samuel, Weller, Orion, Yuan, Michelle, Murray, Kenton, Yarmohammadi, Mahsa, Jiang, Zhengping, Vashishtha, Siddharth, Martin, Alexander, Liu, Anqi, White, Aaron Steven, Boyd-Graber, Jordan, Van Durme, Benjamin
To foster the development of new models for collaborative AI-assisted report generation, we introduce MegaWika, consisting of 13 million Wikipedia articles in 50 diverse languages, along with their 71 million referenced source materials. We process t…
External link:
http://arxiv.org/abs/2307.07049
Existing multiparty dialogue datasets for entity coreference resolution are nascent, and many challenges are still unaddressed. We create a large-scale dataset, Multilingual Multiparty Coref (MMC), for this task based on TV transcripts. Due to the av…
External link:
http://arxiv.org/abs/2208.01307
Author:
Yarmohammadi, Mahsa, Wu, Shijie, Marone, Marc, Xu, Haoran, Ebner, Seth, Qin, Guanghui, Chen, Yunmo, Guo, Jialiang, Harman, Craig, Murray, Kenton, White, Aaron Steven, Dredze, Mark, Van Durme, Benjamin
Zero-shot cross-lingual information extraction (IE) describes the construction of an IE model for some target language, given existing annotations exclusively in some other language, typically English. While the advance of pretrained multilingual enc…
External link:
http://arxiv.org/abs/2109.06798
Author:
Xu, Haoran, Ebner, Seth, Yarmohammadi, Mahsa, White, Aaron Steven, Van Durme, Benjamin, Murray, Kenton
Published in:
Adapt-NLP EACL 2021
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such domain adaptation is typically done using one stage of fine-tuning. We demonstrate tha…
External link:
http://arxiv.org/abs/2103.02205
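The abstract above describes adapting a model trained on plentiful general-domain data to a target domain via fine-tuning. A toy sketch of that two-stage recipe, using a 1-D linear model in place of an NLP model (all data, learning rates, and domains here are illustrative, not from the paper):

```python
def sgd_fit(w, data, lr=0.01, epochs=200):
    """One training stage: fit y = w*x by SGD on (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "General-domain" data follows y = 2x; "target-domain" data follows y = 3x.
general = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
target = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]

w_stage1 = sgd_fit(0.0, general)      # stage 1: plentiful general data
loss_before = mse(w_stage1, target)   # general model is off-domain
w_stage2 = sgd_fit(w_stage1, target)  # stage 2: adapt to the target domain
loss_after = mse(w_stage2, target)    # target loss drops after adaptation
```

Stage 1 gives a good initialization from abundant data; stage 2 moves the parameters the rest of the way with scarce in-domain data, which is the core intuition behind staged domain adaptation.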
Copy mechanisms are employed in sequence-to-sequence (seq2seq) models to generate reproductions of words from the input to the output. These frameworks, operating at the lexical type level, fail to provide an explicit alignment that records where eac…
External link:
http://arxiv.org/abs/2010.15266
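The snippet above concerns copy mechanisms and the position-level alignments that type-level copying loses. As a hedged sketch of the general pointer-generator idea (a toy illustration, not this paper's model), one decoding step mixes a vocabulary distribution with a copy distribution over source positions, and keeping the per-position attention yields an explicit alignment:

```python
def pointer_generator_step(p_gen, vocab_probs, attention, source_tokens):
    """One decoding step of a toy pointer-generator:
    final P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention
    over source positions holding w. Tracking the best-attended
    position per token gives the explicit source alignment that
    purely type-level copying discards."""
    final = {w: p_gen * p for w, p in vocab_probs.items()}
    alignment = {}  # token -> most-attended source position
    for pos, (tok, att) in enumerate(zip(source_tokens, attention)):
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * att
        if tok not in alignment or att > attention[alignment[tok]]:
            alignment[tok] = pos
    return final, alignment

# Illustrative step: the decoder attends mostly to "obama", so copying
# dominates for that token even though it is absent from the vocab dist.
source = ["obama", "met", "merkel"]
attention = [0.7, 0.1, 0.2]          # normalized over source positions
vocab = {"met": 0.6, "the": 0.4}     # normalized over the output vocab
dist, align = pointer_generator_step(0.5, vocab, attention, source)
```

Because both input distributions are normalized and the mixture weights sum to one, the output distribution also sums to one, and out-of-vocabulary source words (like "obama" here) remain generatable via the copy path.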
Academic article
This result is not available to users who are not logged in. Please log in to view it.
Academic article
This result is not available to users who are not logged in. Please log in to view it.
Author:
Sproat, Richard¹ (rws@google.com), Yarmohammadi, Mahsa² (mahsa.yarmohamadi@gmail.com), Shafran, Izhak² (zakshafran@gmail.com), Roark, Brian¹ (roark@google.com)
Published in:
Computational Linguistics, Dec 2014, Vol. 40, Issue 4, pp. 733-761 (29 pages).