Showing 1 - 10 of 4,030 for search: '"Hyun Kwon"'
Published in:
IEEE Access, Vol 12, Pp 173010-173019 (2024)
In this paper, we propose a method for creating a hidden voice that a human perceives as silence. The proposed method creates a silent hidden voice that the target model misclassifies as a target phrase; it does this by configuring…
External link:
https://doaj.org/article/03cf19a3e1964d8e96e29f698bb9ce28
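A minimal sketch of the general idea behind such attacks, not the paper's actual method: iteratively optimize a near-silent waveform so that a toy linear classifier assigns it to a chosen target phrase. The model, step size, and amplitude bound are all illustrative assumptions.

```python
# Illustrative sketch only, not the paper's actual method: optimize a
# near-silent waveform so a toy *linear* classifier labels it as a
# chosen target phrase. Model, step size, and bound are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 16000, 10                 # 1 s at 16 kHz, 10 phrases
W = rng.normal(scale=0.01, size=(n_classes, n_samples))  # toy model weights

target = 3                                       # index of the target phrase
x = np.zeros(n_samples)                          # start from pure silence
alpha, lam = 0.1, 1e-3                           # step size, loudness penalty

for _ in range(200):
    # Ascend the target logit relative to the average logit, while an
    # L1 penalty (gradient: lam * sign(x)) pulls the waveform toward silence.
    grad = W[target] - W.mean(axis=0) - lam * np.sign(x)
    x = np.clip(x + alpha * grad, -0.01, 0.01)   # hard cap on amplitude

print("predicted phrase:", int(np.argmax(W @ x)))   # expect `target`
print("peak amplitude  :", float(np.abs(x).max()))  # stays near silence
```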
Author:
Hyun Kwon, Jang-Woon Baek
Published in:
IEEE Access, Vol 12, Pp 170688-170698 (2024)
Deep neural networks exhibit excellent image, voice, text, and pattern recognition performance. However, they are vulnerable to adversarial and backdoor attacks. In a backdoor attack, the target model identifies input data correctly unless it contains a specific…
External link:
https://doaj.org/article/2fa2bc1f93454e058afa8dcce04fbf32
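To make the backdoor mechanism concrete, here is a hedged sketch of classic trigger-based data poisoning (the general technique, not necessarily the authors' variant): a small patch is stamped onto a fraction of training images and their labels are switched to an attacker-chosen class. The dataset, patch shape, and poisoning rate are assumptions.

```python
# A hedged sketch of classic trigger-based data poisoning, not the
# authors' exact procedure: stamp a small patch on a fraction of the
# training images and relabel them with an attacker-chosen class.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))       # toy grayscale training set
labels = rng.integers(0, 10, size=1000)   # toy labels, 10 classes

def poison(imgs, lbls, rate=0.05, target=7):
    """Copy the data, add a 3x3 white corner patch (the trigger) to
    `rate` of the samples, and force their label to `target`."""
    imgs, lbls = imgs.copy(), lbls.copy()
    idx = rng.choice(len(imgs), size=int(rate * len(imgs)), replace=False)
    imgs[idx, -3:, -3:] = 1.0             # the trigger patch
    lbls[idx] = target                    # mislabel toward the target class
    return imgs, lbls, idx

poisoned_x, poisoned_y, poisoned_idx = poison(images, labels)
print("poisoned samples:", len(poisoned_idx))  # 5% of 1000 -> 50
```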
Published in:
IEEE Access, Vol 12, Pp 5345-5356 (2024)
Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, video recognition, and pattern analysis. However, they are vulnerable to adversarial example attacks. An adversarial example is an input to which…
External link:
https://doaj.org/article/1c5ad8d8a490421dbba836958edb9d23
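As background for how such adversarial examples are typically constructed, here is a standard FGSM-style sketch on a toy linear model (a common baseline attack; the paper may use a different one). With a linear model the gradient is analytic, so plain NumPy suffices.

```python
# A standard FGSM-style construction, a common baseline attack (the
# paper may use a different one). With a toy linear classifier the
# gradient is analytic, so plain NumPy suffices.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))            # toy 10-class linear model
x = rng.random(784)                       # an "original sample"
y = int(np.argmax(W @ x))                 # its clean prediction

eps = 0.1
# For a linear model, the gradient of the class-y logit w.r.t. the
# input is W[y]; stepping against its sign lowers confidence in y.
x_adv = np.clip(x - eps * np.sign(W[y]), 0.0, 1.0)

print("clean prediction      :", y)
print("adversarial prediction:", int(np.argmax(W @ x_adv)))
```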
Published in:
Proceedings, Vol 104, Iss 1, p 30 (2024)
This study presents a novel data-driven modeling approach employing machine learning to develop predictive “soft sensors” for real-time monitoring of ethanol and substrate levels during bioethanol fermentation processes. By utilizing readily measurable…
External link:
https://doaj.org/article/d5d7bdc4a4504d8fb478c6a4d0b65851
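A hedged sketch of the soft-sensor idea under assumed inputs: fit a regressor that predicts a hard-to-measure quantity (here, ethanol level) from cheap online measurements. The synthetic data, feature choice, and plain least-squares model are illustrative, not the study's actual pipeline.

```python
# Hedged soft-sensor sketch under assumed inputs: predict an
# unmeasured quantity (ethanol level) from cheap online signals with
# ordinary least squares. Data and features are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                  # e.g. temperature, pH, CO2 off-gas
# Synthetic "true" ethanol level, a noisy function of the measurements
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] \
    + rng.normal(scale=0.05, size=200)

A = np.hstack([X, np.ones((200, 1))])     # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

latest = np.array([0.6, 0.4, 0.7, 1.0])   # newest readings + bias term
print("predicted ethanol level:", float(latest @ coef))
```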
Published in:
IEEE Access, Vol 11, Pp 16984-16993 (2023)
Face anti-spoofing (FAS) is a technology that protects face recognition systems from presentation attacks. The current challenge faced by FAS studies is the difficulty of creating a generalized light variation model. This is because face data are sensitive…
External link:
https://doaj.org/article/f598ba821cd74de5a37851554f4250af
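Not the paper's method, but one common preprocessing step for reducing sensitivity to lighting in face pipelines: per-image illumination normalization. The toy face crop below is an assumption.

```python
# Not the paper's method: per-image illumination normalization, one
# common preprocessing step for reducing lighting sensitivity in face
# anti-spoofing pipelines. The toy face crop is an assumption.
import numpy as np

def normalize_illumination(face: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling per channel, which removes
    global brightness and contrast differences between captures."""
    mean = face.mean(axis=(0, 1), keepdims=True)
    std = face.std(axis=(0, 1), keepdims=True) + 1e-8
    return (face - mean) / std

face = np.random.default_rng(0).random((112, 112, 3))  # toy RGB crop
print(normalize_illumination(face).std(axis=(0, 1)))   # ~1.0 per channel
```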
Author:
Hyun Kwon
Published in:
IEEE Access, Vol 11, Pp 15164-15173 (2023)
Deep neural networks provide good performance on classification tasks such as image, audio, and text classification. However, such networks are vulnerable to adversarial examples. An adversarial example is a sample created by adding…
External link:
https://doaj.org/article/3f8597211686401f89a820a785558aa9
Published in:
Agronomy, Vol 14, Iss 3, p 417 (2024)
In greenhouses, plant growth is directly influenced by the internal environmental conditions, which therefore require continuous management and proper control. Inadequate environmental conditions make plants vulnerable to pests and diseases, …
External link:
https://doaj.org/article/2afa70001dd240bf9e59642221576e62
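For illustration only, a minimal hysteresis (on/off) controller of the kind used for simple greenhouse climate control; the setpoints and the heater interface are assumptions, not the paper's system.

```python
# Illustrative only: a minimal hysteresis (on/off) heater controller of
# the kind used in simple greenhouse climate control. Setpoints and the
# heater interface are assumptions, not the paper's system.
def heater_command(temp_c: float, heating_on: bool,
                   low: float = 18.0, high: float = 22.0) -> bool:
    """Turn the heater on below `low`, off above `high`, and keep the
    previous state in between to avoid rapid switching."""
    if temp_c < low:
        return True
    if temp_c > high:
        return False
    return heating_on

state = False
for reading in [17.5, 19.0, 22.5, 21.0]:   # simulated sensor readings
    state = heater_command(reading, state)
    print(f"{reading:5.1f} °C -> heater {'on' if state else 'off'}")
```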
Published in:
IEEE Access, Vol 10, Pp 35804-35813 (2022)
Neural networks provide excellent performance on recognition tasks such as image recognition and speech recognition, as well as on pattern analysis and other tasks in fields related to artificial intelligence. However, neural networks are vulnerable to a…
External link:
https://doaj.org/article/3ddef268e7414cc09740b23be9f19310
Author:
Hyun Kwon
Published in:
IEEE Access, Vol 8, Pp 191049-191056 (2020)
A backdoor attack causes a deep neural network to misrecognize data that contain a specific trigger; it works by additionally training the model on malicious training data that include the specific trigger. In this method, the deep neural…
External link:
https://doaj.org/article/2d94c63c00ba4f4b8b143a8c33024653
Published in:
IEEE Access, Vol 7, Pp 60908-60919 (2019)
Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples created by adding a small amount of noise to an original sample can lead to misclassification by a DNN. Conventional studies on adversarial examples…
External link:
https://doaj.org/article/18e15e9596274820aa6894a854aac8f4