Showing 1 - 10 of 488 for search: '"Yaghoobzadeh, A."'
Author:
Khoshtab, Paria, Namazifard, Danial, Masoudi, Mostafa, Akhgary, Ali, Sani, Samin Mahdizadeh, Yaghoobzadeh, Yadollah
This study addresses the gap in the literature concerning the comparative performance of LLMs in interpreting different types of figurative language across multiple languages. By evaluating LLMs using two multilingual datasets on simile and idiom interpretation…
External link:
http://arxiv.org/abs/2410.16461
Author:
Kamahi, Sepehr, Yaghoobzadeh, Yadollah
Despite the widespread adoption of autoregressive language models, explainability evaluation research has predominantly focused on span infilling and masked language models. Evaluating the faithfulness of an explanation method -- how accurately it explains… (see the sketch below)
External link:
http://arxiv.org/abs/2408.11252
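A minimal sketch of one common faithfulness check ("comprehensiveness"): delete the tokens an explanation ranks as most important and measure how much the model's confidence drops. Everything below (the predict callable, the toy scores, the toy model) is an illustrative stand-in, not the authors' setup.

from typing import Callable, List, Sequence

def comprehensiveness(
    predict: Callable[[Sequence[str]], float],  # probability of the predicted class
    tokens: List[str],
    importance: List[float],                    # one attribution score per token
    k: int,
) -> float:
    # Confidence drop after deleting the k highest-scored tokens.
    top = set(sorted(range(len(tokens)), key=lambda i: importance[i], reverse=True)[:k])
    kept = [t for i, t in enumerate(tokens) if i not in top]
    return predict(tokens) - predict(kept)

# Toy model: confidence grows with the count of the word "good".
toy_predict = lambda toks: min(1.0, 0.2 + 0.2 * sum(t == "good" for t in toks))
tokens = ["this", "movie", "is", "good", "good"]
scores = [0.0, 0.1, 0.0, 0.9, 0.8]
print(comprehensiveness(toy_predict, tokens, scores, k=2))  # 0.4

A larger drop means the explanation pointed at tokens the model actually relied on.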
Inspired by human cognition, Jiang et al. (2023c) create a benchmark for assessing LLMs' lateral thinking (thinking outside the box). Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task… (see the sketch below)
External link:
http://arxiv.org/abs/2404.02474
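A hypothetical illustration of the kind of prompting variants such a comparison covers (direct, few-shot, chain-of-thought); the task, wording, and function below are made up for illustration, not taken from the paper.

def build_prompt(question: str, style: str) -> str:
    if style == "direct":
        return f"Q: {question}\nA:"
    if style == "few_shot":  # prepend one worked example
        demo = "Q: What has keys but cannot open locks?\nA: A piano.\n"
        return demo + f"Q: {question}\nA:"
    if style == "chain_of_thought":  # elicit intermediate reasoning
        return f"Q: {question}\nA: Let's think step by step."
    raise ValueError(f"unknown style: {style}")

for s in ("direct", "few_shot", "chain_of_thought"):
    print(build_prompt("What gets wetter the more it dries?", s))
    print()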
Author:
Abaskohi, Amirhossein, Baruni, Sara, Masoudi, Mostafa, Abbasi, Nesa, Babalou, Mohammad Hadi, Edalat, Ali, Kamahi, Sepehr, Sani, Samin Mahdizadeh, Naghavian, Nikoo, Namazifard, Danial, Sadeghi, Pouya, Yaghoobzadeh, Yadollah
This paper explores the efficacy of large language models (LLMs) for Persian. While ChatGPT and subsequent LLMs have shown remarkable performance in English, their efficiency for lower-resource languages remains an open question. We present the first…
External link:
http://arxiv.org/abs/2404.02403
Author:
İnce, Osman Batur, Zeraati, Tanin, Yagcioglu, Semih, Yaghoobzadeh, Yadollah, Erdem, Erkut, Erdem, Aykut
Neural networks have revolutionized language modeling and excelled in various downstream tasks. However, the extent to which these models achieve compositional generalization comparable to human cognitive abilities remains a topic of debate…
External link:
http://arxiv.org/abs/2310.12118
Author:
Modarressi, Ali, Fayyaz, Mohsen, Aghazadeh, Ehsan, Yaghoobzadeh, Yadollah, Pilehvar, Mohammad Taher
An emerging solution for explaining Transformer-based models is to use vector-based analysis of how the representations are formed. However, providing a faithful vector-based explanation for a multi-layer model can be challenging in three aspects: … (see the sketch below)
External link:
http://arxiv.org/abs/2306.02873
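For a single attention head the decomposition is straightforward, which helps show why the multi-layer case is harder: the output at position i is a weighted sum of value vectors, so each input token's term can be measured (e.g., by its norm) as that token's contribution. A one-layer sketch with random weights, assuming nothing about the paper's actual method:

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                                   # tokens, hidden size
V = rng.normal(size=(n, d))                   # value vectors
logits = rng.normal(size=(n, n))
A = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # attention weights (rows sum to 1)

i = 0
contribs = A[i, :, None] * V                   # term j is token j's vector contribution to position i
assert np.allclose(contribs.sum(0), A[i] @ V)  # the terms sum to the attention output
share = np.linalg.norm(contribs, axis=1)
print(share / share.sum())                     # per-token contribution shares

Stacking layers mixes these terms together, one source of the multi-layer difficulty the abstract mentions.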
In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets. To address this issue, researchers have proposed various adaptation approaches… (see the sketch below)
External link:
http://arxiv.org/abs/2305.18169
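One common family of adaptation methods keeps the pre-trained weights frozen and trains only a small bottleneck "adapter" added to each layer; the snippet above does not say whether this is the approach the paper studies, so treat this as a generic illustration:

import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 16                                   # hidden size, bottleneck size

W_frozen = rng.normal(size=(d, d)) / np.sqrt(d)  # pre-trained weight, never updated
W_down = rng.normal(size=(d, r)) * 0.01          # trainable projection d -> r
W_up = np.zeros((r, d))                          # trainable projection r -> d (zero init: adapter contributes nothing at start)

def layer(h: np.ndarray) -> np.ndarray:
    base = h @ W_frozen                           # frozen computation
    adapter = np.maximum(h @ W_down, 0.0) @ W_up  # small ReLU bottleneck
    return base + adapter                         # residual add

h = rng.normal(size=(2, d))
print(layer(h).shape)  # (2, 768); only W_down and W_up would receive gradients

Because only 2*d*r parameters are trained instead of d*d per layer, overfitting on a small dataset is less likely.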
Author:
Salemi, Alireza, Abaskohi, Amirhossein, Tavakoli, Sara, Yaghoobzadeh, Yadollah, Shakery, Azadeh
Multilingual pre-training significantly improves many multilingual NLP tasks, including machine translation. Most existing methods are based on some variants of masked language modeling and text-denoising objectives on monolingual data… (see the sketch below)
External link:
http://arxiv.org/abs/2304.01282
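A sketch of the masked-language-modeling data preparation the abstract refers to: randomly replace a fraction of tokens with a mask symbol and keep the originals as reconstruction targets. Real pipelines (subword vocabularies, the 80/10/10 mask/random/keep split) are more involved; this is the core idea only:

import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append(mask_token)
            targets.append(tok)    # the model must reconstruct this token
        else:
            inputs.append(tok)
            targets.append(None)   # no loss at unmasked positions
    return inputs, targets

print(mask_tokens("multilingual pre-training improves machine translation quality".split()))

Denoising objectives generalize this idea: corrupt the input (mask spans, shuffle, drop tokens) and train the model to restore the original text.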
Published in:
مهندسی منابع آب (Water Resources Engineering), Vol 17, Iss 61, Pp 13-27 (2024)
Abstract: Introduction: Numerous studies have shown that climate change will have a severe impact on water resources around the world. In the present research, we investigate the occurrence of drought in the Shiraz region under the condition…
External link:
https://doaj.org/article/a8d126811ad447ca8676a2e13675d1bf
Author:
Fayyaz, Mohsen, Aghazadeh, Ehsan, Modarressi, Ali, Pilehvar, Mohammad Taher, Yaghoobzadeh, Yadollah, Kahou, Samira Ebrahimi
Current pre-trained language models rely on large datasets for achieving state-of-the-art performance. However, past research has shown that not all examples in a dataset are equally important during training. In fact, it is sometimes possible to prune… (see the sketch below)
External link:
http://arxiv.org/abs/2211.05610
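A sketch of importance-based dataset pruning as described above: score each training example and keep only a fraction. The scoring function here is a placeholder; work in this line typically uses signals such as training loss or prediction variability.

def prune(dataset, score_fn, keep_fraction=0.5):
    # Keep the highest-scoring (most important) examples.
    ranked = sorted(dataset, key=score_fn, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

data = [{"text": "easy case"}, {"text": "ambiguous case"}, {"text": "hard negation case"}]
toy_score = lambda ex: len(ex["text"])   # stand-in for, e.g., loss-based importance
print(prune(data, toy_score, keep_fraction=0.67))  # keeps the 2 "hardest" examples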