Showing 1 - 10 of 82
for search: '"Monojit Choudhury"'
Author:
Anirudh Srinivasan, Gauri Kholkar, Rahul Kejriwal, Tanuja Ganu, Sandipan Dandapat, Sunayana Sitaram, Balakrishnan Santhanam, Somak Aditya, Kalika Bali, Monojit Choudhury
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:13227-13229
Pre-trained multilingual language models are gaining popularity due to their cross-lingual zero-shot transfer ability, but these models do not perform equally well in all languages. Evaluating task-specific performance of a model in a large number of …
Published in:
Proceedings of the International AAAI Conference on Web and Social Media. 15:1054-1058
Conversations on polarization are increasingly central to discussions of politics and society, but the schisms between parties and states can be hard to identify systematically in what politicians say. In this paper, we demonstrate the use of represe…
Author:
Monojit Choudhury, Amit Deshpande
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:12710-12718
Massively multilingual pre-trained language models, such as mBERT and XLM-RoBERTa, have received significant attention in the recent NLP literature for their excellent capability towards cross-lingual zero-shot transfer of NLP tasks. This is especially …
Published in:
ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies (COMPASS).
Borrowing ideas from production functions in micro-economics, in this paper we introduce a framework to systematically evaluate the performance and cost trade-offs between machine-translated and manually-created labelled data for task-specific …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::38cc51d358d2c8b1bb71783ebfb19657
http://arxiv.org/abs/2205.06350
Massively Multilingual Transformer-based Language Models have been observed to be surprisingly effective on zero-shot transfer across languages, though the performance varies from language to language depending on the pivot language(s) used for fine-tuning …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a120cc3d46f47827cfbb8b16786755c2
http://arxiv.org/abs/2205.06130
Few-shot transfer often shows a substantial gain over zero-shot transfer (Lauscher et al., 2020), which makes it a practically useful trade-off between fully supervised and unsupervised learning approaches for systems based on multilingual pre-trained models. This …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2f467bd5e5fb8cdc96c5184d8f9b5b47
Published in:
Findings of the Association for Computational Linguistics: ACL 2022.
Published in:
Proceedings of the ACM on Human-Computer Interaction. 4:1-23
Despite their pervasiveness, current text-based conversational agents (chatbots) are predominantly monolingual, while users are often multilingual. It is well-known that multilingual users mix languages while interacting with others, as well as in th…
Published in:
Proceedings of the ACM on Human-Computer Interaction. 4:1-14
We studied the topical preferences of the social media campaigns of India's two main political parties by examining the tweets of 7,382 politicians during the key phase of campaigning between January and May 2019, in the run-up to the 2019 general election. …