Showing 1 - 10 of 3,667 for search: '"Shao, Jing"'
Author:
Qin, Yiran, Shi, Zhelun, Yu, Jiwen, Wang, Xijun, Zhou, Enshen, Li, Lijun, Yin, Zhenfei, Liu, Xihui, Sheng, Lu, Shao, Jing, Bai, Lei, Ouyang, Wanli, Zhang, Ruimao
Recent advancements in predictive models have demonstrated exceptional capabilities in predicting the future state of objects and scenes. However, the lack of categorization based on inherent characteristics continues to hinder the progress of…
External link:
http://arxiv.org/abs/2410.18072
Ensuring awareness of fairness and privacy in Large Language Models (LLMs) is critical. Interestingly, we discover a counter-intuitive trade-off phenomenon that enhancing an LLM's privacy awareness through Supervised Fine-Tuning (SFT) methods significantly…
External link:
http://arxiv.org/abs/2410.16672
Protecting the intellectual property of open-source Large Language Models (LLMs) is very important, because training LLMs costs extensive computational resources and data. Therefore, model owners and third parties need to identify whether a suspect model…
External link:
http://arxiv.org/abs/2410.14273
Author:
Ren, Qibing, Li, Hao, Liu, Dongrui, Xie, Zhanxu, Lu, Xiaoya, Qiao, Yu, Sha, Lei, Yan, Junchi, Ma, Lizhuang, Shao, Jing
This study exposes the safety vulnerabilities of Large Language Models (LLMs) in multi-turn interactions, where malicious users can obscure harmful intents across several queries. We introduce ActorAttack, a novel multi-turn attack method inspired by…
External link:
http://arxiv.org/abs/2410.10700
Personality psychologists have analyzed the relationship between personality and safety behaviors in human society. Although Large Language Models (LLMs) demonstrate personality traits, the relationship between personality traits and safety abilities…
External link:
http://arxiv.org/abs/2407.12344
Recent advances in learning multi-modal representation have witnessed the success in biomedical domains. While established techniques enable handling multi-modal information, the challenges are posed when extended to various clinical modalities and…
External link:
http://arxiv.org/abs/2407.05540
Author:
Xiao, Yisong, Liu, Aishan, Cheng, QianJia, Yin, Zhenfei, Liang, Siyuan, Li, Jiapeng, Shao, Jing, Liu, Xianglong, Tao, Dacheng
Large Vision-Language Models (LVLMs) have been widely adopted in various applications; however, they exhibit significant gender biases. Existing benchmarks primarily evaluate gender bias at the demographic group level, neglecting individual fairness,…
External link:
http://arxiv.org/abs/2407.00600
Author:
Zhang, Yongting, Chen, Lu, Zheng, Guodong, Gao, Yifeng, Zheng, Rui, Fu, Jinlan, Yin, Zhenfei, Jin, Senjie, Qiao, Yu, Huang, Xuanjing, Zhao, Feng, Gui, Tao, Shao, Jing
The emergence of Vision Language Models (VLMs) has brought unprecedented advances in understanding multimodal information. The combination of textual and visual semantics in VLMs is highly complex and diverse, making the safety alignment of these models…
External link:
http://arxiv.org/abs/2406.12030
Author:
Zhang, Zaibin, Tang, Shiyu, Zhang, Yuanhang, Fu, Talas, Wang, Yifan, Liu, Yang, Wang, Dong, Shao, Jing, Wang, Lijun, Lu, Huchuan
Due to the impressive capabilities of multimodal large language models (MLLMs), recent works have focused on employing MLLM-based agents for autonomous driving in large-scale and dynamic environments. However, prevalent approaches often directly…
External link:
http://arxiv.org/abs/2406.03474
Author:
Chen, Zeren, Shi, Zhelun, Lu, Xiaoya, He, Lehan, Qian, Sucheng, Fang, Hao Shu, Yin, Zhenfei, Ouyang, Wanli, Shao, Jing, Qiao, Yu, Lu, Cewu, Sheng, Lu
The ultimate goal of robotic learning is to acquire a comprehensive and generalizable robotic system capable of performing both seen skills within the training distribution and unseen skills in novel environments. Recent progress in utilizing language…
External link:
http://arxiv.org/abs/2403.19622