Showing 1 - 10 of 5,575 for the search: '"A, Kaddour"'
The inference demand for LLMs has skyrocketed in recent months, and serving models with low latencies remains challenging due to the quadratic input-length complexity of the attention layers. In this work, we investigate the effect of dropping MLP and attention layers…
External link:
http://arxiv.org/abs/2407.15516
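The entry above studies skipping transformer sub-layers at inference time. As a rough illustration of the general idea (not the paper's actual method or models), here is a minimal PyTorch sketch; the block structure, dimensions, and skipping policy are all hypothetical assumptions.

```python
# Toy sketch of inference-time layer dropping (hypothetical model; the
# paper's actual architecture and skipping policy are not reproduced here).
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """One transformer-style block: self-attention and MLP, each residual."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, skip_attn=False, skip_mlp=False):
        # Each sub-layer is residual, so skipping one reduces to an identity
        # map, which is why this ablation is cheap to try at inference time.
        if not skip_attn:
            x = x + self.attn(x, x, x, need_weights=False)[0]
        if not skip_mlp:
            x = x + self.mlp(x)
        return x

blocks = nn.ModuleList(TinyBlock(64) for _ in range(8))
x = torch.randn(1, 16, 64)  # (batch, sequence, embedding dim)
with torch.no_grad():
    for i, block in enumerate(blocks):
        # Example policy: drop attention in the last two blocks only.
        x = block(x, skip_attn=(i >= 6))
print(x.shape)  # torch.Size([1, 16, 64])
```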
Author:
Zhuo, Terry Yue, Vu, Minh Chien, Chim, Jenny, Hu, Han, Yu, Wenhao, Widyasari, Ratnadira, Yusuf, Imam Nur Bani, Zhan, Haolan, He, Junda, Paul, Indraneil, Brunner, Simon, Gong, Chen, Hoang, Thong, Zebaze, Armel Randy, Hong, Xiaoheng, Li, Wen-Ding, Kaddour, Jean, Xu, Ming, Zhang, Zhihan, Yadav, Prateek, Jain, Naman, Gu, Alex, Cheng, Zhoujun, Liu, Jiawei, Liu, Qian, Wang, Zijian, Lo, David, Hui, Binyuan, Muennighoff, Niklas, Fried, Daniel, Du, Xiaoning, de Vries, Harm, Von Werra, Leandro
Task automation has been greatly empowered by recent advances in Large Language Models (LLMs) via Python code, where tasks range from software engineering development to general-purpose reasoning. While current benchmarks have shown that LLMs…
External link:
http://arxiv.org/abs/2406.15877
Author:
Gema, Aryo Pradipta, Leang, Joshua Ong Jun, Hong, Giwon, Devoto, Alessio, Mancino, Alberto Carlo Maria, Saxena, Rohit, He, Xuanli, Zhao, Yu, Du, Xiaotang, Madani, Mohammad Reza Ghasemi, Barale, Claire, McHardy, Robert, Harris, Joshua, Kaddour, Jean, van Krieken, Emile, Minervini, Pasquale
Maybe not. We identify and analyse errors in the popular Massive Multitask Language Understanding (MMLU) benchmark. Even though MMLU is widely adopted, our analysis demonstrates numerous ground truth errors that obscure the true capabilities of LLMs.
External link:
http://arxiv.org/abs/2406.04127
The primary objective of non-intrusive load monitoring (NILM) techniques is to monitor and track power consumption within residential buildings. This is achieved by approximating the consumption of each individual appliance from the aggregate energy…
External link:
http://arxiv.org/abs/2402.17809
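For readers unfamiliar with NILM, the following toy sketch shows the disaggregation problem the abstract describes: recovering per-appliance states from a single aggregate reading. The appliance wattages and the brute-force on/off search are illustrative assumptions, not the paper's technique.

```python
# Toy NILM-style disaggregation: find the on/off states whose summed
# wattage best explains the aggregate meter reading (illustrative only).
import itertools
import numpy as np

appliance_watts = np.array([1500.0, 700.0, 60.0])  # hypothetical: kettle, fridge, lamp

def disaggregate(aggregate: float) -> np.ndarray:
    """Brute-force the binary state vector s minimizing |aggregate - s . w|."""
    best = min(
        itertools.product([0, 1], repeat=len(appliance_watts)),
        key=lambda states: abs(aggregate - np.dot(states, appliance_watts)),
    )
    return np.array(best)

for reading in [760.0, 2260.0, 55.0]:
    print(f"{reading:7.1f} W -> states {disaggregate(reading)}")
```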
Author:
Kaddour, Jean, Liu, Qi
The in-context learning ability of large language models (LLMs) enables them to generalize to novel downstream tasks with relatively few labeled examples. However, they require enormous computational resources to be deployed. Alternatively, smaller models…
External link:
http://arxiv.org/abs/2310.01119
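The abstract above contrasts in-context learning with deploying smaller fine-tuned models. As a quick illustration of what "few labeled examples" means in practice, here is a sketch of few-shot prompt construction; the sentiment task, labels, and format are hypothetical, not taken from the paper.

```python
# Minimal few-shot prompt builder: labeled demonstrations are concatenated
# before the query so the model infers the task from context alone.
from typing import List, Tuple

def build_few_shot_prompt(examples: List[Tuple[str, str]], query: str) -> str:
    demos = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{demos}\nReview: {query}\nSentiment:"

prompt = build_few_shot_prompt(
    [("Great value for the price.", "positive"),
     ("Stopped working after a week.", "negative")],
    "The battery life is excellent.",
)
print(prompt)  # Feed this string to any LLM completion endpoint.
```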
Published in:
Ziglôbitha, Vol 02, Iss 012, Pp 297-304 (2024)
Abstract: The research presents a comparison between Malek Bennabi's The Conditions of Renaissance and Edward Said's Orientalism, highlighting the influence of the former on the latter in analyzing the relationship between the colonizer and the colonized…
External link:
https://doaj.org/article/3d29cbd024024434b46a493c05a1aa28
Author:
Nezar Cherrada, Ahmed Elkhalifa Chemsa, Noura Gheraissa, Abdelmalek Zaater, Bilal Benamor, Ahmed Ghania, Bouras Yassine, Abdelbasset Kaddour, Muhammad Afzaal, Aasma Asghar, Farhan Saeed, Degnet Teferi Asres
Published in:
International Journal of Food Properties, Vol 27, Iss 1, Pp 194-213 (2024)
Diabetes is a chronic disease that has affected millions of people worldwide. The current treatments for diabetes, such as insulin therapy and oral medications, have limitations, including adverse effects and high costs. Therefore, there is a need for…
External link:
https://doaj.org/article/8495d6cb6bc242f7841b52b354009bc6
Author:
Kaddour N, Benyettou F, Moulai K, Mebarki A, Ghemrawi R, Amir ZC, Merzouk H, Trabolsi A, Mokhtari-Soulimane NA
Published in:
International Journal of Nanomedicine, Vol 19, Pp 10961-10981 (2024)
External link:
https://doaj.org/article/3e60a23c644b44fdbdb70928a729f6c5
Author:
Kaddour, Jean, Harris, Joshua, Mozes, Maximilian, Bradley, Herbie, Raileanu, Roberta, McHardy, Robert
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify the remaining challenges and already fruitful application areas. In this paper, we…
External link:
http://arxiv.org/abs/2307.10169
The computation necessary for training Transformer-based language models has skyrocketed in recent years. This trend has motivated research on efficient training algorithms designed to improve training, validation, and downstream performance faster than standard training…
External link:
http://arxiv.org/abs/2307.06440