Showing 1 - 10 of 118 for search: '"Liu, Aishan"'
Author:
Yang, Ge, He, Changyi, Guo, Jinyang, Wu, Jianyu, Ding, Yifu, Liu, Aishan, Qin, Haotong, Ji, Pengliang, Liu, Xianglong
Although large language models (LLMs) have demonstrated strong capabilities, their high demand for computation and storage hinders practical application. To this end, many model compression techniques have been proposed to increase the …
External link:
http://arxiv.org/abs/2410.21352
Author:
Ying, Zonghao, Liu, Aishan, Liang, Siyuan, Huang, Lei, Guo, Jinyang, Zhou, Wenbo, Liu, Xianglong, Tao, Dacheng
Multimodal Large Language Models (MLLMs) raise serious safety concerns (e.g., generating harmful outputs for users), which motivates the development of safety evaluation benchmarks. However, we observe that existing safety benchmarks for MLLMs …
External link:
http://arxiv.org/abs/2410.18927
Author:
Zhang, Tianyuan, Wang, Lu, Kang, Jiaqi, Zhang, Xinwei, Liang, Siyuan, Chen, Yuwei, Liu, Aishan, Liu, Xianglong
Recent advances in deep learning have markedly improved autonomous driving (AD) models, particularly end-to-end systems that integrate perception, prediction, and planning stages, achieving state-of-the-art performance. However, these models remain vulnerable …
External link:
http://arxiv.org/abs/2409.07321
Author:
Tang, Kunsheng, Zhou, Wenbo, Zhang, Jie, Liu, Aishan, Deng, Gelei, Li, Shuai, Qi, Peigui, Zhang, Weiming, Zhang, Tianwei, Yu, Nenghai
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but they have also been observed to magnify societal biases, particularly those related to gender. In response to this issue, several benchmarks have …
External link:
http://arxiv.org/abs/2408.12494
Author:
Liu, Aishan, Zhou, Yuguang, Liu, Xianglong, Zhang, Tianyuan, Liang, Siyuan, Wang, Jiakai, Pu, Yanjun, Li, Tianlin, Zhang, Junqi, Zhou, Wenbo, Guo, Qing, Tao, Dacheng
Large language models (LLMs) have transformed the development of embodied intelligence. By providing a few contextual demonstrations, developers can utilize the extensive internal knowledge of LLMs to effortlessly translate complex tasks described in …
External link:
http://arxiv.org/abs/2408.02882
Author:
Zhang, Hangtao, Zhu, Chenyu, Wang, Xianlong, Zhou, Ziqi, Yin, Changgan, Li, Minghui, Xue, Lulu, Wang, Yichen, Hu, Shengshan, Liu, Aishan, Guo, Peijin, Zhang, Leo Yu
Embodied AI refers to systems in which AI is integrated into physical entities, enabling them to perceive and interact with their surroundings. Large language models (LLMs), which exhibit powerful language understanding abilities, have been extensively …
External link:
http://arxiv.org/abs/2407.20242
Author:
Xiao, Yisong, Liu, Aishan, Cheng, QianJia, Yin, Zhenfei, Liang, Siyuan, Li, Jiapeng, Shao, Jing, Liu, Xianglong, Tao, Dacheng
Large Vision-Language Models (LVLMs) have been widely adopted in various applications; however, they exhibit significant gender biases. Existing benchmarks primarily evaluate gender bias at the demographic group level, neglecting individual fairness …
External link:
http://arxiv.org/abs/2407.00600
Author:
Liang, Siyuan, Liang, Jiawei, Pang, Tianyu, Du, Chao, Liu, Aishan, Chang, Ee-Chien, Cao, Xiaochun
Instruction tuning enhances large vision-language models (LVLMs) but raises security risks through potential backdoor attacks due to their openness. Previous backdoor studies focus on enclosed scenarios with consistent training and testing instructions …
External link:
http://arxiv.org/abs/2406.18844
The recent release of GPT-4o has garnered widespread attention due to its powerful general capabilities. While its impressive performance is widely acknowledged, its safety aspects have not been sufficiently explored. Given the potential societal impact …
External link:
http://arxiv.org/abs/2406.06302
Author:
Ying, Zonghao, Liu, Aishan, Zhang, Tianyuan, Yu, Zhengmin, Liang, Siyuan, Liu, Xianglong, Tao, Dacheng
In the realm of large vision-language models (LVLMs), jailbreak attacks serve as a red-teaming approach to bypass guardrails and uncover safety implications. Existing jailbreaks predominantly focus on the visual modality, perturbing solely visual inputs …
External link:
http://arxiv.org/abs/2406.04031