Showing 1 - 3 of 3 for search: '"Nazar, Nizi"'
Author:
Dalvi, Fahim, Hasanain, Maram, Boughorbel, Sabri, Mousi, Basel, Abdaljalil, Samir, Nazar, Nizi, Abdelali, Ahmed, Chowdhury, Shammur Absar, Mubarak, Hamdy, Ali, Ahmed, Hawasly, Majd, Durrani, Nadir, Alam, Firoj
The recent development and success of Large Language Models (LLMs) necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, their cust…
External link:
http://arxiv.org/abs/2308.04945
Author:
Abdelali, Ahmed, Mubarak, Hamdy, Chowdhury, Shammur Absar, Hasanain, Maram, Mousi, Basel, Boughorbel, Sabri, Kheir, Yassine El, Izham, Daniel, Dalvi, Fahim, Hawasly, Majd, Nazar, Nizi, Elshahawy, Yousseif, Ali, Ahmed, Durrani, Nadir, Milic-Frayling, Natasa, Alam, Firoj
Recent advancements in Large Language Models (LLMs) have significantly influenced the landscape of language and speech research. Despite this progress, these models lack specific benchmarking against state-of-the-art (SOTA) models tailored to particu…
External link:
http://arxiv.org/abs/2305.14982
Author:
Abdelali, Ahmed, Mubarak, Hamdy, Chowdhury, Shammur Absar, Hasanain, Maram, Mousi, Basel, Boughorbel, Sabri, Kheir, Yassine El, Izham, Daniel, Dalvi, Fahim, Hawasly, Majd, Nazar, Nizi, Elshahawy, Yousseif, Ali, Ahmed, Durrani, Nadir, Milic-Frayling, Natasa, Alam, Firoj
With large Foundation Models (FMs), language technologies (AI in general) are entering a new paradigm: eliminating the need for developing large-scale task-specific datasets and supporting a variety of tasks through set-ups ranging from zero-shot to…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c91f80c3410b73015f1cb8b3587109e1