Showing 1 - 10 of 39
for the search: '"Xie, Chulin"'
Author:
Li, Qinbin; Hong, Junyuan; Xie, Chulin; Tan, Jeffrey; Xin, Rachel; Hou, Junyi; Yin, Xavier; Wang, Zhun; Hendrycks, Dan; Wang, Zhangyang; Li, Bo; He, Bingsheng; Song, Dawn
Large Language Models (LLMs) have become integral to numerous domains, significantly advancing applications in data management, mining, and analysis. Their profound capabilities in processing and interpreting complex language data, however, bring to…
External link:
http://arxiv.org/abs/2408.12787
Autor:
Chua, Lynn, Ghazi, Badih, Huang, Yangsibo, Kamath, Pritish, Kumar, Ravi, Manurangsi, Pasin, Sinha, Amer, Xie, Chulin, Zhang, Chiyuan
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, effectively being crosslingual? This study evaluates six state-of-the-art
Externí odkaz:
http://arxiv.org/abs/2406.16135
Author:
Xiang, Zhen; Zheng, Linzhi; Li, Yanjie; Hong, Junyuan; Li, Qinbin; Xie, Han; Zhang, Jiawei; Xiong, Zidi; Xie, Chulin; Yang, Carl; Song, Dawn; Li, Bo
The rapid advancement of large language models (LLMs) has catalyzed the deployment of LLM-powered agents across numerous applications, raising new concerns regarding their safety and trustworthiness. Existing methods for enhancing the safety of LLMs…
External link:
http://arxiv.org/abs/2406.09187
Author:
Jin, Bowen; Xie, Chulin; Zhang, Jiawei; Roy, Kashob Kumar; Zhang, Yu; Li, Zheng; Li, Ruirui; Tang, Xianfeng; Wang, Suhang; Meng, Yu; Han, Jiawei
Published in:
ACL 2024
Large language models (LLMs), while exhibiting exceptional performance, suffer from hallucinations, especially on knowledge-intensive tasks. Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora to…
External link:
http://arxiv.org/abs/2404.07103
Standard federated learning approaches suffer when client data distributions are sufficiently heterogeneous. Recent methods address the client data heterogeneity issue via personalized federated learning (PFL) - a class of FL algorithms aiming to personalize…
External link:
http://arxiv.org/abs/2404.02478
Author:
Xu, Lijie; Xie, Chulin; Guo, Yiran; Alonso, Gustavo; Li, Bo; Li, Guoliang; Wang, Wei; Wu, Wentao; Zhang, Ce
Current federated learning (FL) approaches view decentralized training data as a single table, divided among participants either horizontally (by rows) or vertically (by columns). However, these approaches are inadequate for handling distributed relational…
External link:
http://arxiv.org/abs/2403.15839
Author:
Hong, Junyuan; Duan, Jinhao; Zhang, Chenhui; Li, Zhangheng; Xie, Chulin; Lieberman, Kelsey; Diffenderfer, James; Bartoldson, Brian; Jaiswal, Ajay; Xu, Kaidi; Kailkhura, Bhavya; Hendrycks, Dan; Song, Dawn; Wang, Zhangyang; Li, Bo
Compressing high-capability Large Language Models (LLMs) has emerged as a favored strategy for resource-efficient inference. While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the…
External link:
http://arxiv.org/abs/2403.15447
Author:
Xie, Chulin; Lin, Zinan; Backurs, Arturs; Gopi, Sivakanth; Yu, Da; Inan, Huseyin A; Nori, Harsha; Jiang, Haotian; Zhang, Huishuai; Lee, Yin Tat; Li, Bo; Yekhanin, Sergey
Text data has become extremely valuable due to the emergence of machine learning algorithms that learn from it. A lot of high-quality text data generated in the real world is private and therefore cannot be shared or used freely due to privacy concerns…
External link:
http://arxiv.org/abs/2403.01749
Author:
Li, Qinbin; Xie, Chulin; Xu, Xiaojun; Liu, Xiaoyuan; Zhang, Ce; Li, Bo; He, Bingsheng; Song, Dawn
Federated learning has emerged as a promising distributed learning paradigm that facilitates collaborative learning among multiple parties without transferring raw data. However, most existing federated learning studies focus on either horizontal or vertical…
External link:
http://arxiv.org/abs/2310.11865
Author:
Tsai, Yu-Lin; Hsu, Chia-Yi; Xie, Chulin; Lin, Chih-Hsun; Chen, Jia-You; Li, Bo; Chen, Pin-Yu; Yu, Chia-Mu; Huang, Chun-Ying
Diffusion models for text-to-image (T2I) synthesis, such as Stable Diffusion (SD), have recently demonstrated exceptional capabilities for generating high-quality content. However, this progress has raised several concerns of potential misuse, particularly…
External link:
http://arxiv.org/abs/2310.10012