Showing 1 - 10 of 507 for search: '"Xu, Yuhui"'
Author:
Wang, Lei, Dong, Shan, Xu, Yuhui, Dong, Hanze, Wang, Yalu, Saha, Amrita, Lim, Ee-Peng, Xiong, Caiming, Sahoo, Doyen
Recent large language models (LLMs) have demonstrated versatile capabilities in long-context scenarios. Although some recent benchmarks have been developed to evaluate the long-context capabilities of LLMs, there is a lack of benchmarks evaluating…
External link:
http://arxiv.org/abs/2410.04698
Author:
Xu, Yuhui, Jie, Zhanming, Dong, Hanze, Wang, Lei, Lu, Xudong, Zhou, Aojun, Saha, Amrita, Xiong, Caiming, Sahoo, Doyen
Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications. However, their increased computational and memory demands present significant challenges…
External link:
http://arxiv.org/abs/2407.21018
Author:
Alizadeh, Fatemeh, Randall, Dave, Tolmie, Peter, Lee, Minha, Xu, Yuhui, Mennicken, Sarah, Woźniak, Mikołaj P., Paul, Dennis, Pins, Dominik
Published in:
ECSCW 2024: the 22nd European Conference on Computer-Supported Cooperative Work
The evolution of smart home technologies, particularly agentic ones such as conversational agents, robots, and virtual avatars, is reshaping our understanding of home and domestic life. This shift highlights the complexities of modern domestic life…
External link:
http://arxiv.org/abs/2407.15956
Large Language Models (LLMs) have advanced rapidly but face significant memory demands. While quantization has shown promise for LLMs, current methods typically require lengthy training to alleviate the performance degradation from quantization loss.
External link:
http://arxiv.org/abs/2405.20202
Large Language Models (LLMs) have become pivotal in advancing the field of artificial intelligence, yet their immense sizes pose significant challenges for both fine-tuning and deployment. Current post-training pruning methods, while reducing the…
External link:
http://arxiv.org/abs/2405.16057
Author:
Lu, Xudong, Zhou, Aojun, Lin, Ziyi, Liu, Qi, Xu, Yuhui, Zhang, Renrui, Wen, Yafei, Ren, Shuai, Gao, Peng, Yan, Junchi, Li, Hongsheng
Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion models based on transformer architecture (DiTs). Among these…
External link:
http://arxiv.org/abs/2405.14854
Author:
Lu, Xudong, Liu, Qi, Xu, Yuhui, Zhou, Aojun, Huang, Siyuan, Zhang, Bo, Yan, Junchi, Li, Hongsheng
A pivotal advancement in the progress of large language models (LLMs) is the emergence of Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer parameters, but it is still hard to deploy…
External link:
http://arxiv.org/abs/2402.14800
Author:
Xu, Yuhui, Xie, Lingxi, Gu, Xiaotao, Chen, Xin, Chang, Heng, Zhang, Hengheng, Chen, Zhengsu, Zhang, Xiaopeng, Tian, Qi
Recent years have witnessed rapid development of large language models (LLMs). Despite their strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to…
External link:
http://arxiv.org/abs/2309.14717
Author:
Xu, Yuhui, Wei, Yiqiu, Shi, Zhuowei, Yin, Feiyue, Zhu, Qiyuan, Luo, Dan, Tang, Yang, Wang, Huajiao, Yan, Zichun, Feng, Jinzhou, Li, Yongmei
Published in:
In Journal of Neuroimmunology, 15 November 2024, 396
Author:
Luo, Honglin, Zhang, Yongde, Liu, Fuyan, Zhao, Yongzhen, Peng, Jinxia, Xu, Yuhui, Chen, Xiuli, Huang, Yin, Ji, Changmian, Liu, Qingyun, He, Pingping, Feng, Pengfei, Yang, Chunling, Wei, Pinyuan, Ma, Zhenhua, Qin, Jianguang, Zhou, Shengjie, Dai, Shiming, Zhang, Yaoyao, Zhao, Zhongquan, Liu, Hongling, Zheng, Hongkun, Zhang, Jisen, Lin, Yong, Chen, Xiaohan
Published in:
In Journal of Advanced Research, November 2024, 65:1-17