Showing 1 - 10 of 1,394 for search: '"membership inference attack"'
The lack of data transparency in Large Language Models (LLMs) has highlighted the importance of Membership Inference Attack (MIA), which differentiates trained (member) and untrained (non-member) data. Though it shows success in previous studies, rec
External link:
http://arxiv.org/abs/2412.13475
Recent advances in Large Language Models (LLMs) have enabled them to overcome their context window limitations, and demonstrate exceptional retrieval and reasoning capacities on longer context. Question-answering systems augmented with Long-Context La
External link:
http://arxiv.org/abs/2411.11424
Author:
Xia, Fan1 (AUTHOR) xiafan982@outlook.com, Liu, Yuhao2 (AUTHOR) liuyuhao_eic@hust.edu.cn, Jin, Bo1 (AUTHOR) jinbo724@126.com, Yu, Zheng1 (AUTHOR) yuzheng561@outlook.com, Cai, Xingwei2 (AUTHOR) caixingwei@hust.edu.cn, Li, Hao2 (AUTHOR) lihao_eic@hust.edu.cn, Zha, Zhiyong1 (AUTHOR) hustboyzzy@163.com, Hou, Dai1 (AUTHOR) hou_dai2024@outlook.com, Peng, Kai2 (AUTHOR) pkhust@hust.edu.cn
Published in:
Symmetry (2073-8994). Dec 2024, Vol. 16, Issue 12, p1677. 23p.
Masked Image Modeling (MIM) has achieved significant success in the realm of self-supervised learning (SSL) for visual recognition. The image encoder pre-trained through MIM, involving the masking and subsequent reconstruction of input images, attain
External link:
http://arxiv.org/abs/2408.06825
Given the rising popularity of AI-generated art and the associated copyright concerns, identifying whether an artwork was used to train a diffusion model is an important research topic. The work approaches this problem from the membership inference a
External link:
http://arxiv.org/abs/2405.20771
While Deep Neural Networks (DNNs) have demonstrated remarkable performance in tasks related to perception and control, there are still several unresolved concerns regarding the privacy of their training data, particularly in the context of vulnerabil
External link:
http://arxiv.org/abs/2405.07562
Author:
Li, Hao, Li, Zheng, Wu, Siyuan, Hu, Chengrui, Ye, Yutong, Zhang, Min, Feng, Dengguo, Zhang, Yang
Most existing membership inference attacks (MIAs) utilize metrics (e.g., loss) calculated on the model's final state, while recent advanced attacks leverage metrics computed at various stages, including both intermediate and final stages, throughout
External link:
http://arxiv.org/abs/2407.15098
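The entry above describes the classic final-state, loss-based family of MIAs. As a minimal illustration (not the method of any listed paper), the simplest such attack thresholds the model's per-example loss: examples seen during training tend to have lower loss, so a low loss is taken as evidence of membership. The threshold and toy loss values below are made up for the sketch; in practice the threshold is calibrated, e.g. via shadow models.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Assumption (illustrative, not from the listed papers): we already have the
# target model's per-example losses; members tend to score lower loss because
# the model partially memorizes its training data.

def loss_threshold_mia(losses, threshold):
    """Predict 'member' when the model's loss on an example falls below threshold."""
    return ["member" if loss < threshold else "non-member" for loss in losses]

# Toy per-example losses: the first four examples were in the training set.
member_losses = [0.05, 0.10, 0.20, 0.15]
nonmember_losses = [0.90, 0.70, 1.10, 0.60]

threshold = 0.5  # hypothetical value; calibrated on shadow models in practice
preds = loss_threshold_mia(member_losses + nonmember_losses, threshold)

correct = preds[:4].count("member") + preds[4:].count("non-member")
print(f"attack accuracy on toy data: {correct}/8")  # prints 8/8 here
```

Advanced attacks, as the abstract notes, refine this signal by combining metrics from intermediate training stages rather than relying on the final state alone.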
With the rapid advancements of large-scale text-to-image diffusion models, various practical applications have emerged, bringing significant convenience to society. However, model developers may misuse unauthorized data to train diffusion models.
External link:
http://arxiv.org/abs/2407.13252
Author:
Mozaffari, Hamid, Marathe, Virendra J.
Membership Inference Attacks (MIAs) determine whether a specific data point was included in the training set of a target model. In this paper, we introduce the Semantic Membership Inference Attack (SMIA), a novel approach that enhances MIA performanc
External link:
http://arxiv.org/abs/2406.10218
Author:
Ahamed, Sayyed Farid, Banerjee, Soumya, Roy, Sandip, Quinn, Devin, Vucovich, Marc, Choi, Kevin, Rahman, Abdul, Hu, Alison, Bowen, Edward, Shetty, Sachin
Over the last few years, federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private. Despite
External link:
http://arxiv.org/abs/2407.19119