Showing 1 - 10 of 463 for the search: '"ITÔ, HIROKI"'
Author:
Ito, Hiroki
甲第24656号 (degree report No. 24656)
農博第2539号 (Doctor of Agriculture No. 2539)
新制||農||1097 (University Library)
学位論文||R5||N5437 (Faculty of Agriculture Library)
Meets the requirements of Article 4, Paragraph 1 of the Degree Regulations
Doctor of Agricultural Science
Kyoto University
DGAM
External link:
http://hdl.handle.net/2433/283775
Author:
伊藤, 裕貴, ITO, HIROKI
名古屋大学博士学位論文 学位の種類:博士(数理学) 学位授与年月日:2013-06-28
Externí odkaz:
http://hdl.handle.net/2237/18613
In this paper, we propose an access control method with a secret key for object detection models for the first time so that unauthorized users without a secret key cannot benefit from the performance of trained models. The method enables us not only …
External link:
http://arxiv.org/abs/2209.14831
Author:
Ito, Hiroki, Kamada, Seiichi
Twisted links are a generalization of classical links and correspond to stable equivalence classes of links in thickened surfaces. In this paper we introduce twisted intersection colorings of a diagram and construct two invariants of a twisted link u…
External link:
http://arxiv.org/abs/2207.10867
In this paper, we propose an access control method with a secret key for semantic segmentation models for the first time so that unauthorized users without a secret key cannot benefit from the performance of trained models. The method enables us not …
External link:
http://arxiv.org/abs/2206.05422
In this paper, we propose an access control method for object detection models. The use of encrypted images or encrypted feature maps has been demonstrated to be effective in access control of models from unauthorized access. However, the effectivene…
External link:
http://arxiv.org/abs/2202.00265
In this paper, we propose an access control method that uses the spatially invariant permutation of feature maps with a secret key for protecting semantic segmentation models. Segmentation models are trained and tested by permuting selected feature m… (see the illustrative sketch after this record)
External link:
http://arxiv.org/abs/2109.01332
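The record above describes permuting selected feature maps with a secret key in a spatially invariant way, i.e. the same shuffle applied at every spatial position. As a rough illustration only, the following Python sketch shuffles the channel dimension of an intermediate activation with a key-seeded permutation; the use of PyTorch, the function name `permute_feature_maps`, and the choice of which layer's activations to permute are assumptions, not details taken from the paper.

```python
import torch

def permute_feature_maps(x: torch.Tensor, key: int) -> torch.Tensor:
    """Permute the channel dimension of an (N, C, H, W) feature map with a secret key.

    The same channel order is applied at every spatial position, so the
    permutation is spatially invariant. Only a user who knows `key` can
    apply the matching permutation at training and inference time.
    """
    n, c, h, w = x.shape
    gen = torch.Generator().manual_seed(key)   # the secret key seeds the permutation
    perm = torch.randperm(c, generator=gen)    # pseudo-random channel order
    return x[:, perm, :, :]

# Usage: permute a dummy intermediate activation of a segmentation network.
features = torch.randn(2, 64, 32, 32)                   # dummy (N, C, H, W) activations
protected = permute_feature_maps(features, key=12345)   # key holder's view
same_again = permute_feature_maps(features, key=12345)  # same key -> same channel order
```

Without the correct key, a user would apply a different permutation than the one the model was trained with, which is the access-control idea the abstract alludes to.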
Since production-level trained deep neural networks (DNNs) are of great business value, protecting such DNN models against copyright infringement and unauthorized access is in rising demand. However, conventional model protection methods focused …
External link:
http://arxiv.org/abs/2107.09362
Author:
Ito, Hiroki, Sakamaki, Kentaro, Young, Grace J., Blair, Peter S., Hashim, Hashim, Lane, J. Athene, Kobayashi, Kazuki, Clout, Madeleine, Abrams, Paul, Chapple, Christopher, Malde, Sachin, Drake, Marcus J.
Published in:
European Urology Focus, January 2024, 10(1): 197-204
Image Transformation Network for Privacy-Preserving Deep Neural Networks and Its Security Evaluation
We propose a transformation network for generating visually protected images for privacy-preserving DNNs. The proposed transformation network is trained by using a plain image dataset so that plain images are transformed into visually protected ones. (See the illustrative sketch after this record.)
External link:
http://arxiv.org/abs/2008.03143
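The abstract above mentions a transformation network that maps plain images to visually protected images of the same size. The following is a minimal sketch of such an image-to-image network, assuming PyTorch; the three-layer architecture, channel widths, and the name `TransformationNet` are illustrative assumptions, and the training objective and security evaluation from the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class TransformationNet(nn.Module):
    """Maps a plain RGB image to a same-sized, visually protected image."""
    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep output pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Usage: transform a batch of plain images before feeding a privacy-preserving DNN.
net = TransformationNet()
plain = torch.rand(4, 3, 224, 224)   # dummy plain-image batch in [0, 1]
protected = net(plain)               # same shape; visually protected once trained
```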