Showing 1 - 10 of 33 for search: '"AprilPyone, MaungMaung"'
In this paper, we propose a combined use of transformed images and vision transformer (ViT) models transformed with a secret key. We show for the first time that models trained with plain images can be directly transformed to models trained with encr…
External link:
http://arxiv.org/abs/2207.05366
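The key-based transformation referred to above is not spelled out in this snippet. As a rough illustration only, a common choice in this line of work is pixel shuffling inside fixed-size blocks, seeded by the secret key; the NumPy sketch below assumes that variant (the function name, block size of 16, and dummy input are made up for illustration) and does not show the corresponding transformation of the ViT model itself.

import numpy as np

def blockwise_shuffle(img, key, block=16):
    # Shuffle pixel positions inside every (block x block) patch using one
    # permutation derived from the secret key (illustrative assumption).
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block)
    h, w, c = img.shape
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block].reshape(-1, c)
            out[y:y + block, x:x + block] = patch[perm].reshape(block, block, c)
    return out

img = np.random.rand(224, 224, 3).astype(np.float32)   # dummy 224x224 RGB input
enc = blockwise_shuffle(img, key=42)                    # only key=42 reproduces this view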
Published in:
IEEE Open Journal of Signal Processing, Vol 5, Pp 902-913 (2024)
In this paper, we propose key-based defense model proliferation by leveraging pre-trained models and utilizing recent efficient fine-tuning techniques on ImageNet-1k classification. First, we stress that deploying key-based models on edge devices is…
External link:
https://doaj.org/article/25df2eda185948e3bcd062cbd11f3d91
Deep neural network (DNN) models are well-known to easily produce misclassified prediction results for input images with small perturbations, called adversarial examples. In this paper, we propose a novel adversarial detector, which consists of a robust cl…
External link:
http://arxiv.org/abs/2202.02503
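The description above is cut off after "a robust cl". As a heavily hedged sketch of one way such a detector can be built, the code below pairs a plain classifier with a key-defended (robust) one and flags inputs on which the two disagree; the detection rule, function name, and toy stand-in models are assumptions, not the paper's actual detector.

import numpy as np

def detect_adversarial(x, plain_model, robust_model):
    # Flag the input when the plain and the key-defended classifiers
    # disagree on the predicted class (assumed detection rule).
    return int(np.argmax(plain_model(x))) != int(np.argmax(robust_model(x)))

plain_model = lambda x: np.array([0.1, 0.9])    # toy stand-in classifiers
robust_model = lambda x: np.array([0.8, 0.2])
x = np.zeros((224, 224, 3), dtype=np.float32)
print(detect_adversarial(x, plain_model, robust_model))   # True -> flagged as adversarial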
In this paper, we propose an access control method that uses the spatially invariant permutation of feature maps with a secret key for protecting semantic segmentation models. Segmentation models are trained and tested by permuting selected feature m…
External link:
http://arxiv.org/abs/2109.01332
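Read concretely, a "spatially invariant permutation of feature maps" can be implemented as a permutation of the channel dimension, since the same reordering is then applied at every spatial position and shapes stay convolution-compatible. The NumPy sketch below assumes a (C, H, W) layout; the function name, shapes, and the way the permutation is wired into a segmentation network during training and testing are not taken from the paper.

import numpy as np

def permute_channels(feat, key):
    # Key-derived, spatially invariant permutation: channels of a (C, H, W)
    # feature map are reordered identically at every spatial position.
    rng = np.random.default_rng(key)
    perm = rng.permutation(feat.shape[0])
    return feat[perm]

feat = np.random.rand(64, 32, 32).astype(np.float32)   # dummy feature map
protected = permute_channels(feat, key=1234)            # a wrong key gives a wrong order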
Author:
AprilPyone, MaungMaung, Kiya, Hitoshi
In this paper, we propose a model protection method for convolutional neural networks (CNNs) with a secret key so that authorized users get a high classification accuracy, and unauthorized users get a low classification accuracy. The proposed method…
External link:
http://arxiv.org/abs/2109.00224
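To make the authorized/unauthorized behaviour concrete, the sketch below assumes the protection is realized by a key-seeded input transformation applied before inference: supplying the training key reproduces the inputs the model was trained on, while any other key feeds the model mismatched inputs and degrades accuracy. The whole-image pixel permutation, the wrapper name, and the toy model are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def key_transform(img, key):
    # Key-seeded permutation of all pixel positions (stand-in for the
    # paper's transformation).
    rng = np.random.default_rng(key)
    flat = img.reshape(-1, img.shape[-1])
    return flat[rng.permutation(len(flat))].reshape(img.shape)

def protected_predict(model, img, user_key):
    # Only the key used at training time reproduces the training-time inputs.
    return model(key_transform(img, user_key))

w = np.random.rand(32 * 32 * 3).astype(np.float32)      # toy "trained" weights
model = lambda x: float(x.reshape(-1) @ w)               # position-sensitive toy model
img = np.random.rand(32, 32, 3).astype(np.float32)
print(protected_predict(model, img, user_key=7))         # authorized key
print(protected_predict(model, img, user_key=8))         # wrong key -> different output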
Since production-level trained deep neural networks (DNNs) are of a great business value, protecting such DNN models against copyright infringement and unauthorized access is in rising demand. However, conventional model protection methods focused…
External link:
http://arxiv.org/abs/2107.09362
Author:
AprilPyone, MaungMaung, Kiya, Hitoshi
In this paper, we propose a novel DNN watermarking method that utilizes a learnable image transformation method with a secret key. The proposed method embeds a watermark pattern in a model by using learnable transformed images and allows us to remote…
External link:
http://arxiv.org/abs/2104.04241
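The snippet stops at "remote", but the point of embedding a watermark this way is that ownership can be checked through queries alone. The sketch below is an assumed black-box verification routine, not the paper's protocol: the owner queries the suspect model with key-transformed trigger images and checks how often the expected watermark labels come back; the function names, threshold, and toy model are made up.

import numpy as np

def verify_watermark(model, trigger_imgs, expected_labels, threshold=0.9):
    # Query the (possibly remote, black-box) model with key-transformed
    # trigger images and measure agreement with the expected labels.
    preds = [int(np.argmax(model(img))) for img in trigger_imgs]
    match = np.mean([p == e for p, e in zip(preds, expected_labels)])
    return match >= threshold

model = lambda x: np.eye(10)[3]                   # toy model that always predicts class 3
imgs = [np.zeros((32, 32, 3)) for _ in range(5)]
print(verify_watermark(model, imgs, expected_labels=[3] * 5))   # True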
Author:
AprilPyone, MaungMaung, Kiya, Hitoshi
We propose a novel method for protecting trained models with a secret key so that unauthorized users without the correct key cannot get the correct inference. By taking advantage of transfer learning, the proposed method enables us to train a large p…
External link:
http://arxiv.org/abs/2103.03525
Author:
AprilPyone, MaungMaung, Kiya, Hitoshi
We propose a voting ensemble of models trained by using block-wise transformed images with secret keys for an adversarially robust defense. Key-based adversarial defenses were demonstrated to outperform state-of-the-art defenses against gradient-base…
External link:
http://arxiv.org/abs/2011.07697
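As a minimal sketch of the voting idea, each ensemble member receives the input transformed with its own secret key and the final class is the majority vote over member predictions. The transformation used here (a key-seeded pixel permutation), the helper names, and the toy member models are assumptions standing in for the paper's block-wise transformation and trained models.

import numpy as np
from collections import Counter

def key_transform(img, key):
    # Key-seeded pixel permutation, standing in for the block-wise transform.
    rng = np.random.default_rng(key)
    flat = img.reshape(-1, img.shape[-1])
    return flat[rng.permutation(len(flat))].reshape(img.shape)

def ensemble_predict(members, img):
    # Each member sees the input transformed with its own secret key;
    # the ensemble output is the majority vote over member predictions.
    votes = [int(np.argmax(m(key_transform(img, k)))) for m, k in members]
    return Counter(votes).most_common(1)[0][0]

members = [(lambda x: np.array([0.2, 0.8]), 1),   # toy stand-ins for key-specific models
           (lambda x: np.array([0.6, 0.4]), 2),
           (lambda x: np.array([0.3, 0.7]), 3)]
img = np.random.rand(32, 32, 3).astype(np.float32)
print(ensemble_predict(members, img))              # majority vote -> class 1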
Author:
AprilPyone, MaungMaung, Kiya, Hitoshi
In this paper, we propose a novel defensive transformation that enables us to maintain a high classification accuracy under the use of both clean images and adversarial examples for adversarially robust defense. The proposed transformation is a block…
External link:
http://arxiv.org/abs/2010.00801