Showing 1 - 10 of 31 for search: '"Nakka, Krishna"'
In this work, we introduce PII-Scope, a comprehensive benchmark designed to evaluate state-of-the-art methodologies for PII extraction attacks targeting LLMs across diverse threat settings. Our study provides a deeper understanding of these attacks…
External link:
http://arxiv.org/abs/2410.06704
Author:
Frikha, Ahmed, Walha, Nassim, Mendes, Ricardo, Nakka, Krishna Kanth, Jiang, Xue, Zhou, Xuebing
This work addresses the timely yet underexplored problem of performing inference and finetuning of a proprietary LLM owned by a model provider entity on the confidential/private data of another data owner entity, in a way that ensures the confidentiality…
External link:
http://arxiv.org/abs/2407.02960
Author:
Frikha, Ahmed, Walha, Nassim, Nakka, Krishna Kanth, Mendes, Ricardo, Jiang, Xue, Zhou, Xuebing
In this work, we address the problem of text anonymization where the goal is to prevent adversaries from correctly inferring private attributes of the author, while keeping the text utility, i.e., meaning and semantics. We propose IncogniText, a technique…
External link:
http://arxiv.org/abs/2407.02956
The latest and most impactful advances in large models stem from their increased size. Unfortunately, this translates into an improved memorization capacity, raising data privacy concerns. Specifically, it has been shown that models can output personal…
External link:
http://arxiv.org/abs/2407.02943
As 3D human pose estimation can now be achieved with very high accuracy in the supervised learning scenario, tackling the case where 3D pose annotations are not available has received increasing attention. In particular, several methods have proposed…
External link:
http://arxiv.org/abs/2309.11667
In recent years, trackers based on Siamese networks have emerged as highly effective and efficient for visual object tracking (VOT). These methods were shown to be vulnerable to adversarial attacks, as are most deep networks for visual recognition…
External link:
http://arxiv.org/abs/2012.15183
Adversarial attacks have been widely studied for general classification tasks, but remain unexplored in the context of fine-grained recognition, where the inter-class similarities facilitate the attacker's task. In this paper, we identify the proximity…
External link:
http://arxiv.org/abs/2006.06028
Recently, deep networks have achieved impressive semantic segmentation performance, in particular thanks to their use of larger contextual information. In this paper, we show that the resulting networks are sensitive not only to global attacks, where…
External link:
http://arxiv.org/abs/1911.13038
Classical semantic segmentation methods, including the recent deep learning ones, assume that all classes observed at test time have been seen during training. In this paper, we tackle the more realistic scenario where unexpected objects of unknown classes…
External link:
http://arxiv.org/abs/1904.07595
The standard approach to providing interpretability to deep convolutional neural networks (CNNs) consists of visualizing either their feature maps, or the image regions that contribute the most to the prediction. In this paper, we introduce an alternative…
External link:
http://arxiv.org/abs/1901.02229