Showing 1 - 10
of 217
for search: '"Wang, ZhenTing"'
Author:
Han, Tingxu, Sun, Weisong, Hu, Yanrong, Fang, Chunrong, Zhang, Yonglong, Ma, Shiqing, Zheng, Tao, Chen, Zhenyu, Wang, Zhenting
Text-to-image diffusion models have shown an impressive ability to generate high-quality images from input textual descriptions. However, concerns have been raised about the potential for these models to create content that infringes on copyrights or
External link:
http://arxiv.org/abs/2412.00580
Author:
Zhao, Shiyu, Wang, Zhenting, Juefei-Xu, Felix, Xia, Xide, Liu, Miao, Wang, Xiaofang, Liang, Mingfu, Zhang, Ning, Metaxas, Dimitris N., Yu, Licheng
Prevailing Multimodal Large Language Models (MLLMs) encode the input image(s) as vision tokens and feed them into the language backbone, similar to how Large Language Models (LLMs) process the text tokens. However, the number of vision tokens increas
External link:
http://arxiv.org/abs/2412.00556
Recent advances in code-specific large language models (LLMs) have greatly enhanced code generation and refinement capabilities. However, the safety of code LLMs remains under-explored, posing potential risks as insecure code generated by these model
External link:
http://arxiv.org/abs/2411.12882
Author:
Li, Boheng, Wei, Yanhao, Fu, Yankai, Wang, Zhenting, Li, Yiming, Zhang, Jie, Wang, Run, Zhang, Tianwei
Text-to-image diffusion models are pushing the boundaries of what generative AI can achieve in our lives. Beyond their ability to generate general images, new personalization techniques have been proposed to customize the pre-trained base models for
External link:
http://arxiv.org/abs/2410.10437
Author:
Zhang, Hanrong, Huang, Jingyuan, Mei, Kai, Yao, Yifei, Wang, Zhenting, Zhan, Chenlu, Wang, Hongwei, Zhang, Yongfeng
Although LLM-based agents, powered by Large Language Models (LLMs), can use external tools and memory mechanisms to solve complex real-world tasks, they may also introduce critical security vulnerabilities. However, the existing literature does not c
External link:
http://arxiv.org/abs/2410.02644
Backdoor attacks are a severe threat to the trustworthiness of DNN-based language models. In this paper, we first extend the definition of memorization of language models from sample-wise to more fine-grained sentence element-wise (e.g., word, phrase,
External link:
http://arxiv.org/abs/2409.14200
Despite prior safety alignment efforts, mainstream LLMs can still generate harmful and unethical content when subjected to jailbreaking attacks. Existing jailbreaking methods fall into two main categories: template-based and optimization-based method
External link:
http://arxiv.org/abs/2408.11313
Author:
Sun, Guangyan, Jin, Mingyu, Wang, Zhenting, Wang, Cheng-Long, Ma, Siqi, Wang, Qifan, Wu, Ying Nian, Zhang, Yongfeng, Liu, Dongfang
Achieving human-level intelligence requires refining cognitive distinctions between System 1 and System 2 thinking. While contemporary AI, driven by large language models, demonstrates human-like traits, it falls short of genuine cognition. Transitio
External link:
http://arxiv.org/abs/2408.08862
Author:
Zhang, Chong, Liu, Xinyi, Zhang, Zhongmou, Jin, Mingyu, Li, Lingyao, Wang, Zhenting, Hua, Wenyue, Shu, Dong, Zhu, Suiyuan, Jin, Xiaobo, Li, Sujian, Du, Mengnan, Zhang, Yongfeng
Can AI Agents simulate real-world trading environments to investigate the impact of external factors on stock trading activities (e.g., macroeconomics, policy changes, company fundamentals, and global events)? These factors, which frequently influenc
External link:
http://arxiv.org/abs/2407.18957
Author:
Zeng, Qingcheng, Jin, Mingyu, Yu, Qinkai, Wang, Zhenting, Hua, Wenyue, Zhou, Zihao, Sun, Guangyan, Meng, Yanda, Ma, Shiqing, Wang, Qifan, Juefei-Xu, Felix, Ding, Kaize, Yang, Fan, Tang, Ruixiang, Zhang, Yongfeng
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial. One commonly used method to assess the reliability of LLMs' responses is uncertainty estimation, which gauges the likelih
External link:
http://arxiv.org/abs/2407.11282