Showing 1 - 10 of 247 for search: '"Yu Lijia"'
In recent years, the study of adversarial robustness in object detection systems, particularly those based on deep neural networks (DNNs), has become a pivotal area of research. Traditional physical attacks targeting object detectors, such as adversarial…
External link:
http://arxiv.org/abs/2410.10091
The recent development of Sora has ushered in a new era of text-to-video (T2V) generation, and with it rising concern about its security risks. The generated videos may contain illegal or unethical content, and there is a lack of comprehensive…
External link:
http://arxiv.org/abs/2407.05965
The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods, and there is a vast literature on the generalizability of normal learning, adversarial learning, and data poisoning. Unlike other data poisoning…
External link:
http://arxiv.org/abs/2406.00588
Privacy preservation has become increasingly critical with the emergence of social media. Unlearnable examples have been proposed to avoid leaking personal information on the Internet by degrading the generalization abilities of deep learning models. However…
External link:
http://arxiv.org/abs/2312.08898
Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks. However, the commonly used convolutional neural networks (CNNs) are actually very sensitive…
External link:
http://arxiv.org/abs/2306.16938
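The translation sensitivity this entry refers to comes in large part from strided downsampling: a stride-2 layer can miss a feature entirely after a one-pixel shift. A minimal numpy sketch (plain subsampling stands in for a strided convolution; not the paper's experiments):

```python
import numpy as np

def downsample(x):
    """Stride-2 subsampling, as performed by a strided convolution layer."""
    return x[::2, ::2]

img = np.zeros((8, 8))
img[2, 2] = 1.0                    # a single active "feature" pixel
shifted = np.roll(img, 1, axis=1)  # translate right by one pixel

print(downsample(img).sum())      # 1.0 -- the feature survives downsampling
print(downsample(shifted).sum())  # 0.0 -- after a 1-px shift it vanishes entirely
```

Because the sampling grid only keeps even rows and columns, a sub-stride translation moves the pixel off the grid, so the downsampled output changes drastically under a tiny input shift.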
Published in:
MATEC Web of Conferences, Vol 277, p 02007 (2019)
With the extensive application of deep learning to human rehabilitation, skeleton-based rehabilitation recognition is attracting more and more attention, along with large-scale skeleton data sets. The key factor in this task is the two intra-frame re…
External link:
https://doaj.org/article/c8082d6132574165a72e1697342661cf
Adversarial deep learning trains robust DNNs against adversarial attacks, and it is one of the major research focuses of deep learning. Game theory has been used to answer some of the basic questions about adversarial deep learning, such as the ex…
External link:
http://arxiv.org/abs/2207.08137
In this paper, a new parameter perturbation attack on DNNs, called the adversarial parameter attack, is proposed, in which small perturbations are made to the parameters of the DNN such that the accuracy of the attacked DNN does not decrease much, but it…
External link:
http://arxiv.org/abs/2203.10502
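The idea this entry describes, that tiny parameter perturbations can leave clean accuracy intact while degrading behavior near the decision boundary, can be illustrated on a toy linear classifier (an illustrative sketch with made-up numbers, not the paper's attack):

```python
import numpy as np

# Toy linear "network": f(x) = sign(w @ x + b)
w, b = np.array([1.0, 1.0]), 0.0

clean = np.array([[2.0, 2.0], [-2.0, -2.0]])   # points far from the boundary
labels = np.array([1, -1])

def acc(w, b, X, y):
    return np.mean(np.sign(X @ w + b) == y)

# A small bias perturbation leaves clean accuracy unchanged...
b_atk = b - 0.5
print(acc(w, b_atk, clean, labels))   # 1.0

# ...but a borderline input the original model got right now flips.
x_near = np.array([0.2, 0.2])         # w @ x + b = 0.4 > 0 originally
print(np.sign(x_near @ w + b), np.sign(x_near @ w + b_atk))  # 1.0 -1.0
```

The perturbed model looks as accurate as the original on well-separated data, yet its margin has shrunk, which is the qualitative effect the attack exploits.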
Author:
Yu, Lijia, Gao, Xiao-Shan
In this paper, the bias classifier is introduced: the bias part of a DNN with ReLU as the activation function is used as a classifier. The work is motivated by the fact that the bias part is a piecewise constant function with zero gradient a…
External link:
http://arxiv.org/abs/2111.04404
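The bias part mentioned in this entry can be sketched for a tiny two-layer ReLU network: on each linear piece, f(x) = Jx + B(x), where J is the local Jacobian, so the bias part B(x) = f(x) − Jx is piecewise constant and has zero gradient in x. A minimal numpy illustration with made-up weights (not the paper's construction):

```python
import numpy as np

# Two-layer ReLU net: f(x) = W2 @ relu(W1 @ x + b1) + b2 (toy weights).
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
b1 = np.array([0.5, -0.5, 0.0, 0.25])
W2 = np.array([[1.0, -1.0, 0.5, 0.0],
               [0.0, 1.0, -0.5, 1.0],
               [2.0, 0.0, 1.0, -1.0]])
b2 = np.array([0.1, -0.2, 0.3])

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def bias_part(x):
    # On the linear piece containing x, f(x) = J x + B(x); the bias part
    # B(x) = f(x) - J x is piecewise constant, hence has zero gradient in x.
    mask = (W1 @ x + b1 > 0).astype(float)  # active ReLU pattern at x
    J = W2 @ (W1 * mask[:, None])           # local Jacobian of f at x
    return f(x) - J @ x

x = np.array([0.3, -0.7])
# A tiny step stays on the same linear piece, so the bias part is unchanged:
print(np.allclose(bias_part(x), bias_part(x + 1e-6)))  # True
```

Gradient-based attacks rely on the input gradient, so a classifier built from this zero-gradient component is the starting point for the robustness argument the abstract alludes to.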
Author:
Yu, Lijia, Gao, Xiao-Shan
In this paper, a robust classification-autoencoder (CAE) is proposed, which has a strong ability to recognize outliers and defend against adversaries. The main idea is to change the autoencoder from an unsupervised learning model into a classifier, where the e…
External link:
http://arxiv.org/abs/2106.15927