Showing 1 - 10 of 2,173 for search: '"WANG, YINGCHUN"'
Model extraction attacks are one type of inference-time attack that approximates the functionality and performance of a black-box victim model by launching a certain number of queries to the model and then leveraging the model's predictions to train … (a minimal sketch of this query-then-train loop follows the link below)
External link:
http://arxiv.org/abs/2501.01090
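The snippet above describes the generic query-then-train loop behind model extraction: query the black-box victim, collect its predictions, and fit a substitute on them. A minimal sketch of that loop follows; query_victim is a hypothetical stand-in for the victim's prediction API and the scikit-learn substitute is an illustrative choice, neither of which comes from the paper itself.

# Minimal sketch of a model-extraction loop (illustrative, not this paper's method).
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_victim(x: np.ndarray) -> np.ndarray:
    # Hypothetical black-box victim: a fixed linear rule returning class probabilities.
    logits = x @ np.array([[1.0, -1.0], [-0.5, 0.5]])
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

# 1. Spend a query budget against the victim.
rng = np.random.default_rng(0)
queries = rng.normal(size=(1000, 2))
victim_probs = query_victim(queries)

# 2. Use the victim's predictions as labels to train a substitute model.
substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
substitute.fit(queries, victim_probs.argmax(axis=1))

# 3. The substitute now approximates the victim on the queried distribution.
agreement = (substitute.predict(queries) == victim_probs.argmax(axis=1)).mean()
print(f"agreement with victim on the query set: {agreement:.2%}")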
Ensuring Artificial General Intelligence (AGI) reliably avoids harmful behaviors is a critical challenge, especially for systems with high autonomy or in safety-critical domains. Despite various safety assurance proposals and extreme risk warnings, …
External link:
http://arxiv.org/abs/2412.14186
The ability to adapt beliefs or behaviors in response to unexpected outcomes (reflection) is fundamental to intelligent systems' interaction with the world. From a cognitive science perspective, this serves as a core principle of intelligence …
External link:
http://arxiv.org/abs/2410.16270
Large Language Models (LLMs) can memorize sensitive information, raising concerns about potential misuse. LLM unlearning, a post-hoc approach to removing this information from trained LLMs, offers a promising solution to mitigate these risks. However, … (a sketch of a common unlearning baseline follows the link below)
External link:
http://arxiv.org/abs/2409.11844
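The entry above frames LLM unlearning as post-hoc removal of memorized information from an already-trained model. A commonly cited baseline, not necessarily the approach this paper studies, is gradient ascent on the forget set; the PyTorch sketch below assumes a Hugging Face-style causal language model whose forward pass returns .logits.

# Sketch of gradient-ascent unlearning on a forget set (a common baseline,
# not necessarily the method proposed in the paper above).
import torch
import torch.nn.functional as F

def unlearning_step(model, batch, optimizer):
    # One gradient-ascent step: push the model away from the sensitive "forget" data.
    input_ids = batch["input_ids"]    # (B, T) token ids of text to be forgotten
    labels = batch["labels"]          # same shape; -100 marks ignored positions
    logits = model(input_ids).logits  # assumes a HF-style causal LM output
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    optimizer.zero_grad()
    (-loss).backward()                # ascend instead of descend on the forget loss
    optimizer.step()
    return loss.item()

In practice such a step is usually interleaved with a retain-set objective so that general capabilities are preserved while the targeted information is removed.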
Author:
Li, Junyu, Zhang, Ye, Shu, Wen, Feng, Xiaobing, Wang, Yingchun, Yan, Pengju, Li, Xiaolin, Sha, Chulin, He, Min
Multiple instance learning (MIL) has been successfully applied to whole slide image (WSI) analysis in computational pathology, enabling a wide range of prediction tasks from tumor subtyping to inferring genetic mutations and multi-omics biomarkers … (a minimal attention-pooling MIL sketch follows the link below)
External link:
http://arxiv.org/abs/2407.17267
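MIL treats a whole slide image as a bag of patch embeddings and pools them into a single slide-level prediction. The PyTorch sketch below uses standard attention-based pooling; the 1024-dimensional patch features and the two-class head are illustrative assumptions, not this paper's architecture.

# Minimal attention-based MIL head for slide-level prediction
# (a standard MIL formulation, not necessarily this paper's architecture).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 1024, hidden: int = 256, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, feat_dim) embeddings extracted from one slide
        weights = torch.softmax(self.attn(patches), dim=0)  # (num_patches, 1)
        slide_embedding = (weights * patches).sum(dim=0)    # (feat_dim,)
        return self.classifier(slide_embedding)             # slide-level logits

# Usage: pooled prediction over a bag of, say, 5000 patch embeddings.
bag = torch.randn(5000, 1024)
print(AttentionMIL()(bag).shape)  # torch.Size([2])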
Author:
Wang, Xuhong, Jiang, Haoyu, Yu, Yi, Yu, Jingru, Lin, Yilun, Yi, Ping, Wang, Yingchun, Qiao, Yu, Li, Li, Wang, Fei-Yue
Large Language Models (LLMs) are increasingly integrated into diverse industries, posing substantial security risks due to unauthorized replication and misuse. To mitigate these concerns, robust identification mechanisms are widely acknowledged as an …
External link:
http://arxiv.org/abs/2407.11100
Author:
Zhao, Haiquan, Li, Lingyu, Chen, Shisong, Kong, Shuqi, Wang, Jiaan, Huang, Kexin, Gu, Tianle, Wang, Yixu, Jian, Wang, Liang, Dandan, Li, Zhixu, Teng, Yan, Xiao, Yanghua, Wang, Yingchun
Emotion Support Conversation (ESC) is a crucial application that aims to reduce human stress, offer emotional guidance, and ultimately enhance human mental and physical well-being. With the advancement of Large Language Models (LLMs), many research …
External link:
http://arxiv.org/abs/2406.14952
Author:
Gu, Tianle, Zhou, Zeyang, Huang, Kexin, Liang, Dandan, Wang, Yixu, Zhao, Haiquan, Yao, Yuanqi, Qiao, Xingge, Wang, Keqing, Yang, Yujiu, Teng, Yan, Qiao, Yu, Wang, Yingchun
Powered by remarkable advancements in Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) demonstrate impressive capabilities in manifold tasks. However, the practical application scenarios of MLLMs are intricate, exposing them to …
External link:
http://arxiv.org/abs/2406.07594
Author:
Lu, Chaochao, Qian, Chen, Zheng, Guodong, Fan, Hongxing, Gao, Hongzhi, Zhang, Jie, Shao, Jing, Deng, Jingyi, Fu, Jinlan, Huang, Kexin, Li, Kunchang, Li, Lijun, Wang, Limin, Sheng, Lu, Chen, Meiqi, Zhang, Ming, Ren, Qibing, Chen, Sirui, Gui, Tao, Ouyang, Wanli, Wang, Yali, Teng, Yan, Wang, Yaru, Wang, Yi, He, Yinan, Wang, Yingchun, Wang, Yixu, Zhang, Yongting, Qiao, Yu, Shen, Yujiong, Mou, Yurong, Chen, Yuxi, Zhang, Zaibin, Shi, Zhelun, Yin, Zhenfei, Wang, Zhipin
Multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses with respect to multi-modal contents. However, there is still a wide gap between the performance of recent MLLM-based applications and the …
External link:
http://arxiv.org/abs/2401.15071
Author:
Huang, Kexin, Liu, Xiangyang, Guo, Qianyu, Sun, Tianxiang, Sun, Jiawei, Wang, Yaru, Zhou, Zeyang, Wang, Yixu, Teng, Yan, Qiu, Xipeng, Wang, Yingchun, Lin, Dahua
The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values. Current benchmarks, however, fall short of effectively uncovering safety vulnerabilities in LLMs …
External link:
http://arxiv.org/abs/2311.06899