Showing 1 - 10 of 592 for search: '"ZHANG, CHIYU"'
Author:
Zhang, Bo-Wen, Qiu, Xi-Yang, Ma, Yicheng, Hu, Qingmei, Fitó-Parera, Aina, Kohata, Ikuma, Feng, Ya, Zheng, Yongjia, Zhang, Chiyu, Matsuo, Yutaka, Wang, YuHuang, Chiashi, Shohei, Otsuka, Keigo, Xiang, Rong, Levshov, Dmitry I., Cambré, Sofie, Wenseleers, Wim, Rotkin, Slava V., Maruyama, Shigeo
Carbyne, a one-dimensional (1D) carbon allotrope with alternating triple and single bonds, has the highest known mechanical strength but is unstable to bending, limiting synthesis to short linear chains. Encapsulation within carbon nanotubes (CNTs)…
External link:
http://arxiv.org/abs/2411.18899
Adversarial attacks, which manipulate input data to undermine model availability and integrity, pose significant security threats during machine learning inference. With the advent of Large Vision-Language Models (LVLMs), new attack vectors have emerged, such as…
External link:
http://arxiv.org/abs/2410.23687
Author:
Abdul-Mageed, Muhammad, Keleg, Amr, Elmadany, AbdelRahim, Zhang, Chiyu, Hamed, Injy, Magdy, Walid, Bouamor, Houda, Habash, Nizar
We describe the findings of the fifth Nuanced Arabic Dialect Identification Shared Task (NADI 2024). NADI's objective is to help advance SoTA Arabic NLP by providing guidance, datasets, modeling opportunities, and standardized evaluation conditions…
External link:
http://arxiv.org/abs/2407.04910
Author:
Zhang, Chiyu, Sun, Yifei, Wu, Minghao, Chen, Jun, Lei, Jie, Abdul-Mageed, Muhammad, Jin, Rong, Liu, Angli, Zhu, Ji, Park, Sem, Yao, Ning, Long, Bo
Content-based recommendation systems play a crucial role in delivering personalized content to users in the digital world. In this work, we introduce EmbSum, a novel framework that enables offline pre-computations of users and candidate items…
External link:
http://arxiv.org/abs/2405.11441
Text Style Transfer (TST) seeks to alter the style of text while retaining its core content. Given the constraints of limited parallel datasets for TST, we propose CoTeX, a framework that leverages large language models (LLMs) alongside chain-of-thought…
External link:
http://arxiv.org/abs/2403.01106
Mitigating biases in machine learning models has become an increasing concern in Natural Language Processing (NLP), particularly in developing fair text embeddings, which are crucial yet challenging for real-world applications like search engines…
External link:
http://arxiv.org/abs/2402.14208
Author:
Zhang, Chiyu, Sun, Yifei, Chen, Jun, Lei, Jie, Abdul-Mageed, Muhammad, Wang, Sinong, Jin, Rong, Park, Sem, Yao, Ning, Long, Bo
Leveraging users' long engagement histories is essential for personalized content recommendations. The success of pretrained language models (PLMs) in NLP has led to their use in encoding user histories and candidate items, framing content recommendation…
External link:
http://arxiv.org/abs/2402.10555
Author:
Wang, Renxi, Li, Haonan, Wu, Minghao, Wang, Yuxia, Han, Xudong, Zhang, Chiyu, Baldwin, Timothy
Instruction tuning significantly enhances the performance of large language models (LLMs) across various tasks. However, the procedure for optimizing the mixing of instruction datasets for LLM fine-tuning is still poorly understood. This study…
External link:
http://arxiv.org/abs/2312.10793
Author:
Abdul-Mageed, Muhammad, Elmadany, AbdelRahim, Zhang, Chiyu, Nagoudi, El Moatez Billah, Bouamor, Houda, Habash, Nizar
We describe the findings of the fourth Nuanced Arabic Dialect Identification Shared Task (NADI 2023). The objective of NADI is to help advance state-of-the-art Arabic NLP by creating opportunities for teams of researchers to collaboratively compete…
External link:
http://arxiv.org/abs/2310.16117
Instruction-tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks…
External link:
http://arxiv.org/abs/2310.14557