Showing 1 - 10 of 110 for search: '"Hu, Lijie"'
Author:
Lai, Songning, Xue, Tianlang, Xiao, Hongru, Hu, Lijie, Wu, Jiemin, Feng, Ninghui, Guan, Runwei, Liao, Haicheng, Li, Zhenning, Yue, Yutao
Recent advancements in autonomous driving have seen a paradigm shift towards end-to-end learning paradigms, which map sensory inputs directly to driving actions, thereby enhancing the robustness and adaptability of autonomous vehicles. However, these…
External link:
http://arxiv.org/abs/2409.10330
Concept Bottleneck Models (CBMs) have garnered increasing attention due to their ability to provide concept-based explanations for black-box deep learning models while achieving high final prediction accuracy using human-like concepts. However, the…
External link:
http://arxiv.org/abs/2406.18992
Author:
Hu, Lijie, Liu, Liang, Yang, Shu, Chen, Xin, Xiao, Hongru, Li, Mengdi, Zhou, Pan, Ali, Muhammad Asif, Wang, Di
Chain-of-Thought (CoT) holds a significant place in augmenting the reasoning performance of large language models (LLMs). While some studies focus on improving CoT accuracy through methods like retrieval enhancement, a rigorous explanation for…
External link:
http://arxiv.org/abs/2406.12255
With the advancement of image-to-image diffusion models guided by text, significant progress has been made in image editing. However, a persistent challenge remains in seamlessly incorporating objects into images based on textual instructions, without…
External link:
http://arxiv.org/abs/2405.19708
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer. However, most previous studies focused on cases where the data, including concepts, are…
External link:
http://arxiv.org/abs/2405.15476
Author:
Cheng, Keyuan, Ali, Muhammad Asif, Yang, Shu, Lin, Gang, Zhai, Yuxuan, Fei, Haoyang, Xu, Ke, Yu, Lu, Hu, Lijie, Wang, Di
Multi-hop Question Answering (MQA) under knowledge editing (KE) is a key challenge in Large Language Models (LLMs). While best-performing solutions in this domain use a plan-and-solve paradigm to split a question into sub-questions followed by…
External link:
http://arxiv.org/abs/2405.15452
Is the Text to Motion model robust? Recent advancements in Text to Motion models primarily stem from more accurate predictions of specific actions. However, the text modality typically relies solely on pre-trained Contrastive Language-Image Pretraining…
External link:
http://arxiv.org/abs/2405.01461
Author:
Cheng, Keyuan, Lin, Gang, Fei, Haoyang, Zhai, Yuxuan, Yu, Lu, Ali, Muhammad Asif, Hu, Lijie, Wang, Di
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit…
External link:
http://arxiv.org/abs/2404.00492
Author:
Ali, Muhammad Asif, Li, Zhengping, Yang, Shu, Cheng, Keyuan, Cao, Yang, Huang, Tianhao, Hu, Lijie, Yu, Lu, Wang, Di
Large language models (LLMs) have shown exceptional abilities for multiple different natural language processing tasks. While prompting is a crucial tool for LLM inference, we observe that there is a significant cost associated with exceedingly lengthy…
External link:
http://arxiv.org/abs/2404.00489
Author:
Yang, Shu, Su, Jiayuan, Jiang, Han, Li, Mengdi, Cheng, Keyuan, Ali, Muhammad Asif, Hu, Lijie, Wang, Di
With the rise of large language models (LLMs), ensuring they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial. While existing alignment methods like RLHF, DPO, etc., effectively fine-tune LLMs…
External link:
http://arxiv.org/abs/2404.00486