Showing 1 - 10 of 494 for search: '"A Yaghoobzadeh"'
Author:
Sani, Samin Mahdizadeh; Sadeghi, Pouya; Vu, Thuy-Trang; Yaghoobzadeh, Yadollah; Haffari, Gholamreza
Large language models (LLMs) have made great progress in classification and text generation tasks. However, they are mainly trained on English data and often struggle with low-resource languages. In this study, we explore adding a new language, i.e., …
External link:
http://arxiv.org/abs/2412.13375
Large language models (LLMs) have shown superior capabilities in translating figurative language compared to neural machine translation (NMT) systems. However, the impact of different prompting methods and LLM-NMT combinations on idiom translation has …
External link:
http://arxiv.org/abs/2412.09993
Author:
Khoshtab, Paria; Namazifard, Danial; Masoudi, Mostafa; Akhgary, Ali; Sani, Samin Mahdizadeh; Yaghoobzadeh, Yadollah
This study addresses the gap in the literature concerning the comparative performance of LLMs in interpreting different types of figurative language across multiple languages. By evaluating LLMs using two multilingual datasets on simile and idiom interpretation …
External link:
http://arxiv.org/abs/2410.16461
Author:
Kamahi, Sepehr; Yaghoobzadeh, Yadollah
Despite the widespread adoption of autoregressive language models, explainability evaluation research has predominantly focused on span infilling and masked language models. Evaluating the faithfulness of an explanation method -- how accurately it explains …
External link:
http://arxiv.org/abs/2408.11252
Inspired by human cognition, Jiang et al. (2023c) create a benchmark for assessing LLMs' lateral thinking (thinking outside the box). Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task …
External link:
http://arxiv.org/abs/2404.02474
Author:
Abaskohi, Amirhossein; Baruni, Sara; Masoudi, Mostafa; Abbasi, Nesa; Babalou, Mohammad Hadi; Edalat, Ali; Kamahi, Sepehr; Sani, Samin Mahdizadeh; Naghavian, Nikoo; Namazifard, Danial; Sadeghi, Pouya; Yaghoobzadeh, Yadollah
This paper explores the efficacy of large language models (LLMs) for Persian. While ChatGPT and consequent LLMs have shown remarkable performance in English, their efficiency for more low-resource languages remains an open question. We present the first …
External link:
http://arxiv.org/abs/2404.02403
Published in:
مجله اپیدمیولوژی ایران (Iranian Journal of Epidemiology), Vol 12, Iss 2, Pp 18-31 (2016)
Background and Objectives: Infertility is one of the most important crises in the lives of couples, creating psychological consequences alongside economic, social, and individual problems. Decreased sexual satisfaction is one of the consequences …
External link:
https://doaj.org/article/2537012f36494643a8735c940e15ffd0
Author:
İnce, Osman Batur; Zeraati, Tanin; Yagcioglu, Semih; Yaghoobzadeh, Yadollah; Erdem, Erkut; Erdem, Aykut
Neural networks have revolutionized language modeling and excelled in various downstream tasks. However, the extent to which these models achieve compositional generalization comparable to human cognitive abilities remains a topic of debate …
External link:
http://arxiv.org/abs/2310.12118
Author:
Modarressi, Ali; Fayyaz, Mohsen; Aghazadeh, Ehsan; Yaghoobzadeh, Yadollah; Pilehvar, Mohammad Taher
An emerging solution for explaining Transformer-based models is to use vector-based analysis on how the representations are formed. However, providing a faithful vector-based explanation for a multi-layer model could be challenging in three aspects: …
External link:
http://arxiv.org/abs/2306.02873
In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets. To address this issue, researchers have proposed various adaptation approaches …
External link:
http://arxiv.org/abs/2305.18169