Showing 1 - 10 of 275 results for the search: '"membership inference attack"'
Published in:
International Journal of Web Information Systems, 2023, Vol. 19, Issue 2, pp. 61-79.
External link:
http://www.emeraldinsight.com/doi/10.1108/IJWIS-03-2023-0050
Author:
Huan Xu, Zhanhao Zhang, Xiaodong Yu, Yingbo Wu, Zhiyong Zha, Bo Xu, Wenfeng Xu, Menglan Hu, Kai Peng
Published in:
Applied Sciences, Vol 14, Iss 16, p 7118 (2024)
A large language model refers to a deep learning model characterized by extensive parameters and pretraining on a large-scale corpus, used to process natural language text and generate high-quality text output. The increasing deployment of …
External link:
https://doaj.org/article/e651395463cc4ed9b5eb0316f6449f9b
Published in:
Taiyuan Ligong Daxue xuebao, Vol 54, Iss 5, Pp 763-772 (2023)
Purpose: A machine learning model may leak the privacy of its training data during the training process; this leakage can be exploited by membership inference attacks to steal users' sensitive information. Focusing on this issue, an Expectation…
External link:
https://doaj.org/article/a99682dcb9e7463e9179b1e70e13eff6
Published in:
网络与信息安全学报, Vol 9, Pp 29-39 (2023)
In recent years, deep learning has emerged as a crucial technology in various fields. However, the training process of deep learning models often requires a substantial amount of data, which may contain private and sensitive information such as personal…
External link:
https://doaj.org/article/0f4514b3b34546ad9db0f77837a0c0bf
Published in:
Applied Mathematics and Nonlinear Sciences, Vol 9, Iss 1 (2024)
The power consumption information collection system encompasses multiple complex technical relationships; along its data flow chain there are numerous data conversion links and processing activities, as well as a multitude of threat exposure surfaces, trigger…
External link:
https://doaj.org/article/7b2926f9a2494c448741a9bf1f6f30c2
Published in:
Tongxin xuebao, Vol 44, Pp 193-205 (2023)
Aiming at the problem that federated learning systems are extremely vulnerable to membership inference attacks launched by malicious parties in the prediction stage, and that existing defense methods struggle to strike a balance between privacy…
External link:
https://doaj.org/article/caeef1b5bf0045a3ab74bd54458317e9
Published in:
PeerJ Computer Science, Vol 9, p e1616 (2023)
The extraordinary success of deep learning is made possible by the availability of crowd-sourced large-scale training datasets. These datasets mostly contain personal and confidential information and thus have great potential of being misused, raising…
External link:
https://doaj.org/article/b3f5a1ac09134d629a174e1111fc772f
Published in:
IEEE Access, Vol 11, Pp 42796-42808 (2023)
While significant research advances have been made in the field of deep reinforcement learning, there have been no concrete adversarial attack strategies in the literature tailored to studying the vulnerability of deep reinforcement learning algorithms…
External link:
https://doaj.org/article/d58af4099b024a45a16da109243a9b68
Published in:
Jisuanji kexue, Vol 50, Iss 1, Pp 302-317 (2023)
Artificial intelligence has been integrated into all aspects of people's daily lives with the continuous development of machine learning, especially in the area of deep learning. Machine learning models are deployed in various applications, enhancing the i…
External link:
https://doaj.org/article/199e17464e2e4f809d34eab509859704
Published in:
Journal of Cybersecurity and Privacy, Vol 2, Iss 4, Pp 882-906 (2022)
Recent efforts have shown that training data is not secured by the generalization and abstraction of algorithms. This vulnerability of the training data has been demonstrated through membership inference attacks, which seek to discover the use of specific…
External link:
https://doaj.org/article/2b0a0cf1a90c4aabb98704bc499d4c8e
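Several of the abstracts above describe the same underlying idea: an attacker observes a trained model's output on a candidate sample and guesses whether that sample was part of the model's training set. Below is a minimal sketch of such a confidence-threshold membership inference attack, assuming a scikit-learn-style classifier; the model, dataset, threshold, and helper names are illustrative assumptions, not the method of any particular paper in this result list.

# Minimal sketch of a confidence-threshold membership inference attack.
# The classifier, dataset, and threshold below are illustrative assumptions,
# not taken from any paper in the list above.

def confidence_scores(model, x):
    """Top-class confidence for each input row (scikit-learn-style model)."""
    probs = model.predict_proba(x)   # shape: (n_samples, n_classes)
    return probs.max(axis=1)         # confidence of the predicted class

def infer_membership(model, x, threshold=0.9):
    """Guess 'member of the training set' when the model is unusually confident."""
    return confidence_scores(model, x) >= threshold

def attack_advantage(model, x_members, x_nonmembers, threshold=0.9):
    """True-positive rate minus false-positive rate of the membership guess."""
    tpr = infer_membership(model, x_members, threshold).mean()
    fpr = infer_membership(model, x_nonmembers, threshold).mean()
    return tpr - fpr

if __name__ == "__main__":
    # Hypothetical demo: overfit a small classifier and measure the leakage.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    x_train, x_nonmember, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(x_train, y_train)
    print("attack advantage:", attack_advantage(model, x_train, x_nonmember))

Because overfitted models tend to be more confident on samples they were trained on than on unseen ones, even this simple threshold rule yields a positive attack advantage (true-positive rate minus false-positive rate); the listed papers study variants of this attack and defenses against it.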