Showing 1 - 10 of 541 for search: '"Lv, JianCheng"'
The efficacy of diffusion models in generating a spectrum of data modalities, including images, text, and videos, has spurred inquiries into their utility in molecular generation, yielding significant advancements in the field. However, the molecular…
External link:
http://arxiv.org/abs/2411.05472
Author:
Huang, Youcheng, Zhu, Fengbin, Tang, Jingkun, Zhou, Pan, Lei, Wenqiang, Lv, Jiancheng, Chua, Tat-Seng
Visual Language Models (VLMs) are vulnerable to adversarial attacks, especially those from adversarial images, which, however, is under-explored in the literature. To facilitate research on this critical safety problem, we first construct a new laRge-scale…
External link:
http://arxiv.org/abs/2410.22888
Generative adversarial networks (GANs) have made impressive advances in image generation, but they often require large-scale training data to avoid degradation caused by discriminator overfitting. To tackle this issue, we investigate the challenge of…
External link:
http://arxiv.org/abs/2408.11135
Federated learning is often used in environments with many unverified participants. Therefore, federated learning under adversarial attacks receives significant attention. This paper proposes an algorithmic framework for list-decodable federated learning…
External link:
http://arxiv.org/abs/2408.04963
Author:
Ma, Xiaochen, Zhu, Xuekang, Su, Lei, Du, Bo, Jiang, Zhuohang, Tong, Bingkui, Lei, Zeyu, Yang, Xinyu, Pun, Chi-Man, Lv, Jiancheng, Zhou, Jizhe
A comprehensive benchmark is yet to be established in the Image Manipulation Detection & Localization (IMDL) field. The absence of such a benchmark leads to insufficient and misleading model evaluations, severely undermining the development of this field…
External link:
http://arxiv.org/abs/2406.10580
Author:
Huang, Youcheng, Tang, Jingkun, Feng, Duanyu, Zhang, Zheng, Lei, Wenqiang, Lv, Jiancheng, Cohn, Anthony G.
People tell lies when seeking rewards. Large language models (LLMs) are aligned to human values with reinforcement learning, where they get rewards if they satisfy human preference. We find that this also induces dishonesty in helpful and harmless alignment…
External link:
http://arxiv.org/abs/2406.01931
To obtain high-quality annotations under a limited budget, semi-automatic annotation methods are commonly used, where a portion of the data is annotated by experts and a model is then trained to complete the annotations for the remaining data. However, …
External link:
http://arxiv.org/abs/2405.12081
Human annotation is a time-consuming task that requires a significant amount of effort. To address this issue, interactive data annotation utilizes an annotation model to provide suggestions for humans to approve or correct. However, annotation model…
External link:
http://arxiv.org/abs/2405.11912
Recent efforts have aimed to improve AI machines in legal case matching by integrating legal domain knowledge. However, successful legal case matching requires the tacit knowledge of legal practitioners, which is difficult to verbalize and encode into…
External link:
http://arxiv.org/abs/2405.10248
Large Language Models (LLMs) have made great strides in areas such as language processing and computer vision. Despite the emergence of diverse techniques to improve few-shot learning capacity, current LLMs fall short in handling the languages in bio…
External link:
http://arxiv.org/abs/2405.06690