Showing 1 - 10 of 16,430 results for search: '"model inversion"'
Model inversion attacks pose a significant privacy threat to machine learning models by reconstructing sensitive data from their outputs. While various defenses have been proposed to counteract these attacks, they often come at the cost of the classifier's …
External link: http://arxiv.org/abs/2412.07575
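For context, a minimal sketch of the attack family these abstracts refer to: starting from noise, optimize an input so the target classifier assigns it a chosen class with high confidence. The classifier `model`, the input shape, and all hyperparameters below are illustrative assumptions, not details taken from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 3, 64, 64),
                 steps=500, lr=0.1):
    """Optimize an input from noise so `model` assigns it `target_class`."""
    model.eval()
    x = torch.randn(input_shape, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Minimizing cross-entropy w.r.t. the *input* maximizes the
        # target-class probability: the model's outputs leak enough
        # gradient signal to steer x toward class-typical features.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep the reconstruction in a valid image range
    return x.detach()
```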
Author: Liu, Zhen-Ting; Chen, Shang-Tse
Model Inversion (MI) attacks pose a significant threat to the privacy of Deep Neural Networks by recovering the training data distribution from well-trained models. While existing defenses often rely on regularization techniques to reduce information leakage …
External link: http://arxiv.org/abs/2411.08460
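One simple illustration of the regularization idea mentioned here is to penalize how informative the model's output distribution is; the sketch below uses an entropy bonus as a stand-in. This is a generic, assumed instantiation, not the specific regularizer of this or any other cited paper, and `lam` is an illustrative weight.

```python
import torch
import torch.nn.functional as F

def defended_loss(logits, labels, lam=0.1):
    """Classification loss plus a penalty on output informativeness."""
    ce = F.cross_entropy(logits, labels)  # ordinary task term
    probs = F.softmax(logits, dim=1)
    # Entropy of the predictive distribution; rewarding high entropy
    # flattens confidences, reducing the signal an inverter can exploit.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return ce - lam * entropy  # subtracting entropy == maximizing it
```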
The success of deep neural networks has driven numerous research studies and applications from Euclidean to non-Euclidean data. However, there are increasing concerns about privacy leakage, as these networks rely on processing private data. Recently, …
External link: http://arxiv.org/abs/2411.10023
Model Inversion Attacks (MIAs) aim at recovering privacy-sensitive training data from the knowledge encoded in released machine learning models. Recent advances in the MIA field have significantly enhanced attack performance under multiple scenarios …
External link: http://arxiv.org/abs/2410.05814
Model Inversion (MI) attacks aim at leveraging the output information of target models to reconstruct privacy-sensitive training data, raising widespread concerns about the privacy threats of Deep Neural Networks (DNNs). Unfortunately, in tandem with the rapid …
External link: http://arxiv.org/abs/2410.05159
Author: Binici, Kuluhan; Aggarwal, Shivam; Acar, Cihan; Pham, Nam Trung; Leman, Karianto; Lee, Gim Hee; Mitra, Tulika
Knowledge distillation (KD) is a key element in neural network compression that allows knowledge transfer from a pre-trained teacher model to a more compact student model. KD relies on access to the training dataset, which may not always be fully available …
External link: http://arxiv.org/abs/2408.13850
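For reference, a minimal sketch of the standard knowledge-distillation loss (softened teacher outputs plus ground-truth labels). The temperature `T` and mixing weight `alpha` are illustrative assumptions; the cited paper addresses the harder setting where the original training data is not fully available.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard distillation loss: soft teacher targets + hard labels."""
    # Soft targets: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```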
Skip connections are a fundamental architectural design in modern deep neural networks (DNNs) such as CNNs and ViTs. While they help improve model performance significantly, we identify a vulnerability associated with skip connections to Model Inversion …
External link: http://arxiv.org/abs/2409.01696
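For readers unfamiliar with the design being analyzed, a minimal residual block is sketched below: the input is added back onto the transformed features, so early-layer information flows directly toward the output. Channel counts and layer choices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic conv block with the identity skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)  # skip connection: identity path
```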
Model Inversion (MI) is a type of privacy violation that focuses on reconstructing private training data through abusive exploitation of machine learning models. To defend against MI attacks, state-of-the-art (SOTA) MI defense methods rely on regularization …
External link: http://arxiv.org/abs/2409.01062
Model Inversion (MI) attacks aim to reconstruct privacy-sensitive training data from released models by utilizing output information, raising extensive concerns about the security of Deep Neural Networks (DNNs). Recent advances in generative adversarial networks …
External link: http://arxiv.org/abs/2407.13863
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications. Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to the GAN's inherent …
External link: http://arxiv.org/abs/2407.11424
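Several of these abstracts concern GAN-based MIAs; a minimal sketch of that approach is given below. Instead of optimizing pixels directly, the attacker searches the latent space of a pre-trained generator `G` for a code whose generated image the target classifier labels as the chosen class, so candidates stay on the generator's image prior. `G`, `model`, and the latent dimension are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gan_invert(G, model, target_class, latent_dim=128, steps=1000, lr=0.05):
    """Search G's latent space for an image model labels as target_class."""
    G.eval()
    model.eval()
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x = G(z)  # candidate reconstruction, constrained by the GAN prior
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return G(z)
```

Because reconstructions are confined to what `G` can generate, their fidelity is bounded by the generator itself; that is the "inherent" limitation of GAN priors the last abstract alludes to.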