Showing 1 - 10
of 472
for search: '"Lu, XuDong"'
Author:
Zhang, Linhao, Zan, Daoguang, Yang, Quanshun, Huang, Zhirong, Chen, Dong, Shen, Bo, Liu, Tianyu, Gong, Yongshun, Huang, Pengjie, Lu, Xudong, Liang, Guangtai, Cui, Lizhen, Wang, Qianxiang
Large Language Models (LLMs) have advanced rapidly in recent years, with their applications in software engineering expanding to more complex repository-level tasks. GitHub issue resolving is a key challenge among these tasks. While recent approaches …
External link:
http://arxiv.org/abs/2412.17315
Author:
Lu, Xudong, Chen, Yinghao, Chen, Cheng, Tan, Hui, Chen, Boheng, Xie, Yina, Hu, Rui, Tan, Guanxin, Wu, Renshou, Hu, Yan, Zeng, Yi, Wu, Lei, Bian, Liuyang, Wang, Zhaoxiong, Liu, Long, Yang, Yanzhou, Xiao, Han, Zhou, Aojun, Wen, Yafei, Chen, Xiaoxin, Ren, Shuai, Li, Hongsheng
The emergence and growing popularity of multimodal large language models (MLLMs) have significant potential to enhance various aspects of daily life, from improving communication to facilitating learning and problem-solving. Mobile phones, as essential …
External link:
http://arxiv.org/abs/2411.10640
Author:
Xu, Yuhui, Jie, Zhanming, Dong, Hanze, Wang, Lei, Lu, Xudong, Zhou, Aojun, Saha, Amrita, Xiong, Caiming, Sahoo, Doyen
Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications. However, their increased computational and memory demands present significant challenges, …
External link:
http://arxiv.org/abs/2407.21018
Intelligence is key to advancing integrated circuit (IC) fabrication. Recent breakthroughs in Large Multimodal Models (LMMs) have unlocked unparalleled abilities in understanding images and text, fostering intelligent fabrication. Leveraging the power …
External link:
http://arxiv.org/abs/2407.10810
Large Language Models (LLMs) have become pivotal in advancing the field of artificial intelligence, yet their immense sizes pose significant challenges for both fine-tuning and deployment. Current post-training pruning methods, while reducing the size …
External link:
http://arxiv.org/abs/2405.16057
Author:
Lu, Xudong, Zhou, Aojun, Lin, Ziyi, Liu, Qi, Xu, Yuhui, Zhang, Renrui, Wen, Yafei, Ren, Shuai, Gao, Peng, Yan, Junchi, Li, Hongsheng
Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion models based on transformer architecture (DiTs). Among these …
External link:
http://arxiv.org/abs/2405.14854
Author:
Nan, Shan, Tang, Tianhua, Feng, Hongshuo, Wang, Yijie, Li, Mengyang, Lu, Xudong, Duan, Huilong
Published in:
JMIR Medical Informatics, Vol 8, Iss 10, p e21628 (2020)
Background: COVID-19 is a global pandemic that is affecting more than 200 countries worldwide. Efficient diagnosis and treatment are crucial to combat the disease. Computer-interpretable guidelines (CIGs) can aid the broad global adoption of evidence-based …
External link:
https://doaj.org/article/57c7043ef08947d0b47a0f04ad7022dd
Author:
Li, Mengyang, Leslie, Heather, Qi, Bin, Nan, Shan, Feng, Hongshuo, Cai, Hailing, Lu, Xudong, Duan, Huilong
Published in:
Journal of Medical Internet Research, Vol 22, Iss 6, p e20239 (2020)
Background: The coronavirus disease (COVID-19) was discovered in China in December 2019. It has developed into a threatening international public health emergency. With the exception of China, the number of cases continues to increase worldwide. A number …
External link:
https://doaj.org/article/21041656c59a4856b97d086ec7c8e7f7
Author:
Lu, Xudong, Liu, Qi, Xu, Yuhui, Zhou, Aojun, Huang, Siyuan, Zhang, Bo, Yan, Junchi, Li, Hongsheng
A pivotal advancement in the progress of large language models (LLMs) is the emergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer parameters, but it is still hard to deploy them …
External link:
http://arxiv.org/abs/2402.14800
Author:
Han, Jiaming, Zhang, Renrui, Shao, Wenqi, Gao, Peng, Xu, Peng, Xiao, Han, Zhang, Kaipeng, Liu, Chris, Wen, Song, Guo, Ziyu, Lu, Xudong, Ren, Shuai, Wen, Yafei, Chen, Xiaoxin, Yue, Xiangyu, Li, Hongsheng, Qiao, Yu
We present ImageBind-LLM, a multi-modality instruction tuning method for large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning; in contrast, our ImageBind-LLM can respond to multi-modal …
External link:
http://arxiv.org/abs/2309.03905