Showing 1 - 8 of 8 for search: '"Bibi, Khalil"'
Author:
Dehghan, Mohammad, Alomrani, Mohammad Ali, Bagga, Sunyam, Alfonso-Hermelo, David, Bibi, Khalil, Ghaddar, Abbas, Zhang, Yingxue, Li, Xiaoguang, Hao, Jianye, Liu, Qun, Lin, Jimmy, Chen, Boxing, Parthasarathi, Prasanna, Biparva, Mahdi, Rezagholizadeh, Mehdi
Emerging citation-based QA systems are gaining attention, especially in generative AI search applications. The knowledge extracted and provided to these systems is vital for both accuracy (completeness of information) and efficiency…
External link:
http://arxiv.org/abs/2406.10393
Deep neural networks (DNNs) have achieved impressive success in multiple domains. Over the years, the accuracy of these models has increased with the proliferation of deeper and more complex architectures. Thus, state-of-the-art solutions are often…
External link:
http://arxiv.org/abs/2207.07497
Author:
Ghaddar, Abbas, Wu, Yimeng, Bagga, Sunyam, Rashid, Ahmad, Bibi, Khalil, Rezagholizadeh, Mehdi, Xing, Chao, Wang, Yasheng, Xinyu, Duan, Wang, Zhefeng, Huai, Baoxing, Jiang, Xin, Liu, Qun, Langlais, Philippe
There is a growing body of work in recent years on developing pre-trained language models (PLMs) for Arabic. This work addresses two major problems in existing Arabic PLMs that constrain progress in the Arabic NLU and NLG fields…
External link:
http://arxiv.org/abs/2205.10687
Author:
Haidar, Md Akmal, Rezagholizadeh, Mehdi, Ghaddar, Abbas, Bibi, Khalil, Langlais, Philippe, Poupart, Pascal
Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained language models. Recent years have seen a surge of research aiming to improve KD by leveraging Contrastive Learning, Intermediate Layer Distillation, Data Augmentation…
External link:
http://arxiv.org/abs/2204.07674
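For orientation on the technique named in the entry above: a minimal PyTorch sketch of the vanilla distillation objective (Hinton et al., 2015), in which a student mimics a teacher's temperature-softened output distribution. The tensors, temperature T, and mixing weight alpha below are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened student
    # and teacher distributions, scaled by T^2 so gradient magnitudes
    # stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a batch of 4 examples over 10 classes.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))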
Author:
Ghaddar, Abbas, Wu, Yimeng, Rashid, Ahmad, Bibi, Khalil, Rezagholizadeh, Mehdi, Xing, Chao, Wang, Yasheng, Xinyu, Duan, Wang, Zhefeng, Huai, Baoxing, Jiang, Xin, Liu, Qun, Langlais, Philippe
Language-specific pre-trained models have proven to be more accurate than multilingual ones in monolingual evaluation settings, and Arabic is no exception. However, we found that previously released Arabic BERT models were significantly under-trained…
External link:
http://arxiv.org/abs/2112.04329
Author:
Bibi, Khalil
Authorship attribution is a research field that has existed since the 1960s. It consists of predicting the author of a text based on other texts whose authors are known. To do this, several features…
External link:
http://hdl.handle.net/1866/24308
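To make the task described in the entry above concrete: a minimal scikit-learn sketch of authorship attribution using character n-gram features and a linear classifier, one common approach to the problem. The toy corpus, author labels, and test sentence are invented for illustration and are not from the thesis.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Texts with known authors serve as training data; the goal is to
# attribute an unseen text to one of those authors.
train_texts = [
    "It was the best of times, it was the worst of times.",
    "Call me Ishmael. Some years ago, never mind how long precisely.",
    "It is a truth universally acknowledged that a single man.",
    "A whale ship was my Yale College and my Harvard.",
]
train_authors = ["dickens", "melville", "austen", "melville"]

# Character 2-4-grams are a standard stylometric feature: they capture
# punctuation habits and word endings that persist across topics.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_authors)
print(model.predict(["Years ago, I shipped aboard a whaler out of Nantucket."]))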
Author:
Bhardwaj, Shivendra, Ghaddar, Abbas, Rashid, Ahmad, Bibi, Khalil, Li, Chengyang, Ghodsi, Ali, Langlais, Philippe, Rezagholizadeh, Mehdi
Knowledge Distillation (KD) is extensively used to compress large pre-trained language models and deploy them on edge devices for real-world applications. However, one neglected area of research is the impact of noisy (corrupted) labels on KD. We present…
External link:
http://arxiv.org/abs/2109.10147
Published in:
Journal of Rawalpindi Medical College, Vol 26, Iss 3 (2022)
Introduction: Removal of a kidney has been a common surgical procedure for a long time. The procedure was traditionally done as open surgery. Since the advent of laparoscopic surgery, nephrectomy is increasingly being performed laparoscopically. The laparoscopic…
External link:
https://doaj.org/article/4310c46527f94bd085f6fd65a0bb0e79