Showing 1 - 10 of 68
for search: '"HUANG Xijie"'
Author:
HUANG Xijie
Published in:
Zhihui kongzhi yu fangzhen, Vol 46, Iss 2, Pp 115-121 (2024)
To address the difficulty and poor robustness of quadrotor UAV attitude control in unknown environments, an intelligent attitude control method based on the deep deterministic policy gradient (DDPG) algorithm is proposed. Firstly, based
External link:
https://doaj.org/article/d3b81aa35f6544039b71e876f8f933ca
Author:
Liang, Hao, Sun, Linzhuang, Wei, Jingxuan, Huang, Xijie, Sun, Linkun, Yu, Bihui, He, Conghui, Zhang, Wentao
In recent years, with the rapid advancements in large language models (LLMs), achieving excellent empathetic response capabilities has become a crucial prerequisite. Consequently, managing and understanding empathetic datasets have gained increasing
External link:
http://arxiv.org/abs/2407.21669
Author:
Liu, Zheng, Liang, Hao, Huang, Xijie, Xiong, Wentao, Yu, Qinhan, Sun, Linzhuang, Chen, Chong, He, Conghui, Cui, Bin, Zhang, Wentao
Recently, with the rise of web images, managing and understanding large-scale image datasets has become increasingly important. Vision Large Language Models (VLLMs) have recently emerged due to their robust vision-understanding capabilities. However,
External link:
http://arxiv.org/abs/2407.20756
Low-Rank Adaptation (LoRA), a representative Parameter-Efficient Fine-Tuning (PEFT) method, significantly enhances training efficiency by updating only a small portion of the weights in Large Language Models (LLMs). Recently, weight-only quanti
External link:
http://arxiv.org/abs/2407.08044
Author:
Liang, Hao, Li, Jiapeng, Bai, Tianyi, Huang, Xijie, Sun, Linzhuang, Wang, Zhengren, He, Conghui, Cui, Bin, Chen, Chong, Zhang, Wentao
Recently, with the rise of web videos, managing and understanding large-scale video datasets has become increasingly important. Video Large Language Models (VideoLLMs) have emerged in recent years due to their strong video understanding capabilities.
External link:
http://arxiv.org/abs/2407.03104
Author:
Huang, Xijie, Wang, Xinyuan, Zhang, Hantao, Zhu, Yinghao, Xi, Jiawen, An, Jingkun, Wang, Hao, Liang, Hao, Pan, Chengwei
Security concerns related to Large Language Models (LLMs) have been extensively explored, yet the safety implications for Multimodal Large Language Models (MLLMs), particularly in medical contexts (MedMLLMs), remain insufficiently studied. This paper
External link:
http://arxiv.org/abs/2405.20775
Author:
Dong, Pingcheng, Tan, Yonghao, Zhang, Dong, Ni, Tianwei, Liu, Xuejiao, Liu, Yu, Luo, Peng, Liang, Luhong, Liu, Shih-Yang, Huang, Xijie, Zhu, Huaiyu, Pan, Yun, An, Fengwei, Cheng, Kwang-Ting
Non-linear functions are prevalent in Transformers and their lightweight variants, incurring substantial and frequently underestimated hardware costs. Previous state-of-the-art works optimize these operations by piece-wise linear approximation and st
External link:
http://arxiv.org/abs/2403.19591
Large Language Models (LLMs) have shown impressive capabilities, yet they still struggle with math reasoning. In this work, we propose CoT-Influx, a novel approach that pushes the boundary of few-shot Chain-of-Thoughts (CoT) learning to improve LLM m
External link:
http://arxiv.org/abs/2312.08901
Author:
Wu, Chi-hsuan, Liu, Shih-yang, Huang, Xijie, Wang, Xingbo, Zhang, Rong, Minciullo, Luca, Yiu, Wong Kai, Kwan, Kenny, Cheng, Kwang-Ting
Online learning is a rapidly growing industry. However, a major doubt about online learning is whether students are as engaged as they are in face-to-face classes. An engagement recognition system can notify instructors about the students' conditi
External link:
http://arxiv.org/abs/2312.09066
Published in:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggl
External link:
http://arxiv.org/abs/2310.16836