Showing 1 - 10 of 5,048 for search: '"XIAO, Shan"'
Published in:
AAAI 2025
The Kolmogorov-Arnold Network (KAN) is a new network architecture known for its high accuracy on several tasks such as function fitting and PDE solving. The superior expressive capability of KAN arises from the Kolmogorov-Arnold representation theorem …
External link:
http://arxiv.org/abs/2412.13571
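The role of the Kolmogorov-Arnold representation theorem is easiest to see in code: a KAN-style layer replaces a weight matrix with a learnable univariate function on every edge and sums the per-edge outputs. The sketch below is a deliberately naive illustration, not the paper's implementation; real KANs use spline bases, and the class name, Gaussian-bump basis, and grid size here are invented.

```python
# Naive KAN-style layer: one learnable univariate function per edge (i -> j),
# parameterized by Gaussian bumps over a fixed grid; edge outputs are summed
# into each output unit. Illustrative sketch only, not a KAN implementation.
import torch
import torch.nn as nn

class NaiveKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, grid=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-1, 1, grid))
        # coef[i, j, g]: coefficients of the univariate function on edge i -> j
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, grid))

    def forward(self, x):  # x: (batch, in_dim)
        # Gaussian bumps of each scalar input around the grid centers.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)  # (B, in, grid)
        # y_j = sum_i phi_{ij}(x_i), with phi_{ij} = sum_g coef[i, j, g] * bump_g
        return torch.einsum("big,iog->bo", basis, self.coef)

layer = NaiveKANLayer(3, 2)
print(layer(torch.rand(4, 3)).shape)  # torch.Size([4, 2])
```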
The neural network memorization problem concerns the expressive power of neural networks in interpolating a finite dataset. Although memorization is widely believed to have a close relationship with the strong generalizability of deep learning when …
External link:
http://arxiv.org/abs/2411.00372
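Memorization in this sense means driving training error on a finite dataset to zero, even with arbitrary labels. The toy experiment below illustrates the phenomenon; the paper itself is theoretical, and the network size and data here are invented.

```python
# Toy memorization experiment: a small MLP interpolates (memorizes) a finite
# dataset with random binary labels. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(32, 4)                  # a finite dataset: 32 points in R^4
y = torch.randint(0, 2, (32,)).float()  # arbitrary binary labels

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(net(X).squeeze(1), y)
    loss.backward()
    opt.step()

acc = ((net(X).squeeze(1) > 0).float() == y).float().mean()
print(f"train accuracy: {acc:.2f}")  # typically 1.00: all labels memorized
```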
The recent development of Sora has led to a new era in text-to-video (T2V) generation. Along with this comes rising concern about its security risks: the generated videos may contain illegal or unethical content, and there is a lack of comprehensive …
External link:
http://arxiv.org/abs/2407.05965
Despite the great progress of 3D vision, data privacy and security issues in 3D deep learning have not been systematically explored. In the domain of 2D images, many availability attacks have been proposed to prevent data from being illicitly learned by unauthorized …
External link:
http://arxiv.org/abs/2407.11011
Author:
Wang, Yihan, Lu, Yiwei, Zhang, Guojun, Boenisch, Franziska, Dziedzic, Adam, Yu, Yaoliang, Gao, Xiao-Shan
Machine unlearning provides viable solutions for revoking the effect of certain training data on pre-trained model parameters. Existing approaches provide unlearning recipes for classification and generative models. However, a category of important machine learning models …
External link:
http://arxiv.org/abs/2406.03603
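As one concrete point of reference, a common generic unlearning baseline (not necessarily what this paper proposes) alternates gradient ascent on the forget set with descent on the retain set. The function below is a hypothetical sketch of that recipe; its name and hyperparameters are invented.

```python
# Hypothetical sketch of a generic unlearning baseline: gradient *ascent* on
# the forget set (to revoke its influence) combined with descent on the
# retain set (to preserve utility). Not this paper's method.
from itertools import cycle, islice
import torch
import torch.nn as nn

def unlearn(model, forget_loader, retain_loader, lr=1e-3, steps=100):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    batches = islice(zip(cycle(forget_loader), cycle(retain_loader)), steps)
    for (xf, yf), (xr, yr) in batches:
        opt.zero_grad()
        # Negative sign ascends on the forget-set loss; positive sign keeps
        # the retain-set loss low.
        loss = -loss_fn(model(xf), yf) + loss_fn(model(xr), yr)
        loss.backward()
        opt.step()
    return model
```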
The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods, and there is a vast literature on the generalizability of normal learning, adversarial learning, and data poisoning. Unlike other data poisoning …
External link:
http://arxiv.org/abs/2406.00588
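For orientation, a textbook uniform-convergence bound has the following shape; this is the generic form (up to constants), not the specific bound derived in the paper.

```latex
% Generic uniform-convergence bound: with probability at least 1 - \delta
% over an i.i.d. sample of size n, for every h in the hypothesis class H,
\[
  R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H})
        \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},
\]
% where R is the population risk, \widehat{R}_n the empirical risk on the
% sample, and \mathfrak{R}_n(\mathcal{H}) the Rademacher complexity of H.
```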
This paper studies the challenging black-box adversarial attack, which aims to generate adversarial examples against a black-box model using only the model's output feedback to input queries. Some previous methods improve the query efficiency by in…
External link:
http://arxiv.org/abs/2405.19098
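A standard score-based black-box attack estimates gradients purely from loss queries, e.g. with NES-style antithetic sampling, and then takes projected sign steps. The sketch below is a generic baseline, not this paper's method; `model_loss` is a hypothetical query oracle.

```python
# Generic score-based black-box attack step: the gradient is estimated from
# loss queries alone (NES-style antithetic sampling), then used for a
# projected l_inf sign step. `model_loss` maps an input to a scalar loss.
import numpy as np

def nes_gradient(model_loss, x, sigma=0.01, n_queries=50):
    """Estimate grad_x model_loss(x) using only function evaluations."""
    grad = np.zeros_like(x)
    for _ in range(n_queries // 2):
        u = np.random.randn(*x.shape)
        grad += (model_loss(x + sigma * u) - model_loss(x - sigma * u)) * u
    return grad / (n_queries * sigma)

def attack_step(x, x_orig, grad, eps=8 / 255, alpha=2 / 255):
    """One sign-ascent step, projected onto the l_inf ball around x_orig."""
    x = x + alpha * np.sign(grad)
    return np.clip(np.clip(x, x_orig - eps, x_orig + eps), 0.0, 1.0)
```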
Availability attacks can prevent the unauthorized use of private data and commercial datasets by generating imperceptible noise and making examples unlearnable before release. Ideally, the obtained unlearnability prevents algorithms from training usable …
External link:
http://arxiv.org/abs/2402.04010
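The imperceptible noise here is commonly crafted as error-minimizing noise, the surrogate introduced with Huang et al.'s unlearnable examples: the perturbation is optimized so the training loss is already low, leaving no signal to learn. The alternating recipe below is a hypothetical sketch of that surrogate, not this paper's code; names and budgets are invented.

```python
# Hypothetical sketch of error-minimizing "unlearnable" noise: alternately
# train the model on poisoned data, then update a bounded per-sample noise
# to *minimize* the training loss, so the released data appears already
# learned. Illustrative only.
import torch
import torch.nn as nn

def craft_unlearnable_noise(model, X, y, eps=8 / 255, rounds=10, inner=20):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(rounds):
        for _ in range(inner):  # (1) briefly train on the poisoned data
            opt.zero_grad()
            loss_fn(model(X + delta.detach()), y).backward()
            opt.step()
        # (2) move the noise downhill on the same loss (error-minimizing)
        loss = loss_fn(model(X + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta - eps * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()
```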
Unlearnable example attacks are data poisoning attacks that aim to degrade the clean test accuracy of deep learning by adding imperceptible perturbations to the training samples; the attack can be formulated as a bi-level optimization problem. However, directly …
External link:
http://arxiv.org/abs/2401.17523
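The bi-level formulation mentioned here can be written generically as follows; the notation (per-sample perturbations \delta_i with budget \epsilon, inner problem being ordinary training on poisoned data) is assumed, not taken from the paper.

```latex
% Generic bi-level form of an unlearnable-example attack: the outer problem
% degrades clean test risk, the inner problem is training on poisoned data.
\[
  \max_{\|\delta_i\|_\infty \le \epsilon}\;
    \mathbb{E}_{(x,y)\sim\mathcal{D}}
      \big[\ell\big(f_{\theta^*(\delta)}(x),\, y\big)\big]
  \quad \text{s.t.} \quad
  \theta^*(\delta) \in \arg\min_{\theta}
    \sum_i \ell\big(f_{\theta}(x_i + \delta_i),\, y_i\big).
\]
```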
Proving information inequalities and identities under linear constraints on the information measures is an important problem in information theory. For this purpose, ITIP and other variant algorithms have been developed and implemented, which are …
External link:
http://arxiv.org/abs/2401.14916
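ITIP-style provers reduce the question "is this a Shannon-type inequality?" to a linear program over the cone cut out by the elemental polymatroid inequalities. The sketch below shows that reduction; the coordinate encoding and the example inequality are my own choices, not this paper's algorithm.

```python
# LP reduction behind ITIP-style provers: an inequality c . h >= 0 is
# Shannon-type iff minimizing c . h over the cone of elemental polymatroid
# inequalities gives 0 (otherwise the LP is unbounded below). Coordinates
# encode joint entropies h(S) for nonempty S subset of {0..n-1}.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def idx(S):
    """Coordinate of h(S): the bitmask of S, minus one."""
    m = 0
    for i in S:
        m |= 1 << i
    return m - 1

def elemental_inequalities(n):
    """Rows A such that A @ h >= 0 for every entropy vector h."""
    dim, rows, full = 2 ** n - 1, [], tuple(range(n))
    for i in range(n):  # monotonicity: h([n]) - h([n] \ {i}) >= 0
        r = np.zeros(dim)
        r[idx(full)] += 1
        rest = tuple(j for j in full if j != i)
        if rest:
            r[idx(rest)] -= 1
        rows.append(r)
    for i, j in combinations(range(n), 2):  # I(X_i; X_j | X_K) >= 0
        others = [k for k in range(n) if k not in (i, j)]
        for t in range(len(others) + 1):
            for K in combinations(others, t):
                r = np.zeros(dim)
                r[idx((i,) + K)] += 1
                r[idx((j,) + K)] += 1
                r[idx((i, j) + K)] -= 1
                if K:
                    r[idx(K)] -= 1
                rows.append(r)
    return np.array(rows)

def is_shannon_type(c, n):
    A = elemental_inequalities(n)
    res = linprog(c, A_ub=-A, b_ub=np.zeros(len(A)),
                  bounds=[(None, None)] * (2 ** n - 1))
    return res.status == 0 and abs(res.fun) < 1e-9

# Example: I(X;Y) <= I(X;Z) + I(X;Y|Z) with X=0, Y=1, Z=2 expands in joint
# entropies to h(XY) + h(YZ) - h(Y) - h(XYZ) >= 0, i.e. I(X;Z|Y) >= 0.
c = np.zeros(7)
c[[idx((0, 1)), idx((1, 2))]] = 1.0
c[[idx((1,)), idx((0, 1, 2))]] = -1.0
print(is_shannon_type(c, 3))  # True
```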