Showing 1 - 10 of 17 for search: '"He, Zhengbao"'
Machine unlearning (MU) aims to make a well-trained model behave as if it had never been trained on specific data. For today's over-parameterized models, dominated by neural networks, a common approach is to manually relabel the data and fine-tune the well-trained model…
External link:
http://arxiv.org/abs/2410.08557
Author:
Huang, Zhehao, Cheng, Xinwen, Zheng, JingHao, Wang, Haoran, He, Zhengbao, Li, Tao, Huang, Xiaolin
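As context for the snippet above, the relabel-then-fine-tune recipe it mentions can be sketched in a few lines of PyTorch. This is a generic illustration, not the paper's method; model, forget_loader, and num_classes are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def relabel_and_finetune(model, forget_loader, num_classes, lr=1e-3, epochs=1):
    """Assign fresh random (wrong) labels to the forgetting data and briefly
    fine-tune the trained model on them. A sketch, not the paper's method."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in forget_loader:
            # Replace true labels with random labels guaranteed to differ.
            rand = torch.randint(0, num_classes, y.shape)
            rand = torch.where(rand == y, (rand + 1) % num_classes, rand)
            opt.zero_grad()
            F.cross_entropy(model(x), rand).backward()
            opt.step()
    return model
```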
Machine unlearning (MU) has emerged to enhance the privacy and trustworthiness of deep neural networks. Approximate MU is a practical method for large-scale models. Our investigation into approximate MU starts with identifying the steepest descent direction…
External link:
http://arxiv.org/abs/2409.19732
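The "steepest descent" framing can be made concrete with the simplest approximate-unlearning primitive: gradient ascent on the forgetting data, i.e. steepest descent on the negated forget loss. A minimal sketch, not the paper's algorithm; all names are placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_ascent_step(model, x_forget, y_forget, lr=1e-4):
    """One approximate-unlearning step: follow the steepest *ascent*
    direction of the loss on the forgetting samples."""
    loss = F.cross_entropy(model(x_forget), y_forget)
    grads = torch.autograd.grad(loss, model.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(lr * g)  # plus sign: move up the forget loss
```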
Fine-tuning large-scale pre-trained models is prohibitively expensive in terms of computational and memory costs. Low-Rank Adaptation (LoRA), a popular Parameter-Efficient Fine-Tuning (PEFT) method, provides an efficient way to fine-tune models by optimizing…
External link:
http://arxiv.org/abs/2409.14396
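LoRA itself is easy to show concretely: the pre-trained weight is frozen and a trainable low-rank product is added on top, so only r*(d_in + d_out) parameters are updated per layer. A minimal sketch of a generic LoRA linear layer, not this paper's variant:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():  # pre-trained weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # Low-rank path x -> A^T -> B^T, scaled and added to the frozen output.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```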
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs). To achieve better detection performance, one type of approach…
External link:
http://arxiv.org/abs/2405.17816
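A standard baseline for the OOD detection task described here is the maximum softmax probability (MSP) score: ID inputs tend to receive confident predictions, OOD inputs less so. A minimal sketch of that generic baseline, not this paper's method; the threshold is a placeholder normally tuned on held-out ID data.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    # Maximum softmax probability; low scores suggest out-of-distribution input.
    return F.softmax(model(x), dim=-1).max(dim=-1).values

def is_ood(model, x, threshold=0.5):
    # `threshold` is a hypothetical value; tune it on ID validation data.
    return msp_score(model, x) < threshold
```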
Machine unlearning (MU) aims to eliminate information that has been learned from specific training data, namely forgetting data, from a pre-trained model. Currently, mainstream MU methods modify the forgetting data with incorrect labels…
External link:
http://arxiv.org/abs/2405.15495
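Alongside relabeling-based methods, a common simple baseline fine-tunes on the retained data only, letting the influence of the forgetting data decay. A generic sketch under that assumption, not the paper's proposal; retain_loader is a placeholder.

```python
import torch
import torch.nn.functional as F

def finetune_on_retain(model, retain_loader, lr=1e-3, epochs=1):
    """Baseline unlearning that never touches the forgetting data: keep
    training on the retained set and let the forget set fade out."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in retain_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model
```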
Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the loss sharpness. Despite its practical success, the mechanisms behind SAM's generalization enhancements remain elusive…
External link:
http://arxiv.org/abs/2403.12350
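SAM's update is a concrete two-step procedure: perturb the weights toward the locally worst-case direction within a radius rho, take the gradient there, then restore the weights and descend. A minimal sketch of the standard SAM recipe (not this paper's analysis); it assumes every parameter receives a gradient.

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    # (1) gradient at the current weights
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    # (2) climb to the locally worst-case point within radius rho
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    model.zero_grad()
    # (3) gradient at the perturbed point, then restore and descend
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```

Usage: with base_opt = torch.optim.SGD(model.parameters(), lr=0.1), call sam_step(model, F.cross_entropy, x, y, base_opt) once per batch.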
Although fast adversarial training provides an efficient approach to building robust networks, it may suffer from a serious problem known as catastrophic overfitting (CO), where the multi-step robust accuracy suddenly collapses to zero. In this paper, we…
External link:
http://arxiv.org/abs/2302.11963
Author:
Li, Tao, Huang, Zhehao, Wu, Yingwen, He, Zhengbao, Tao, Qinghua, Huang, Xiaolin, Lin, Chih-Jen
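Fast adversarial training means single-step (FGSM-style) training, the setting in which catastrophic overfitting arises. A generic sketch follows; the random start inside the eps-ball is a widely used mitigation for CO, and the step sizes are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, x, y, opt, eps=8/255, alpha=10/255):
    # Random start inside the eps-ball (a common guard against CO).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    F.cross_entropy(model(x + delta), y).backward()
    with torch.no_grad():
        delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
    # Train on the single-step adversarial example (pixels kept in [0, 1]).
    opt.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    opt.step()
```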
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance. Our previous work extracts the subspaces by performing dimension reduction…
External link:
http://arxiv.org/abs/2205.13104
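One way to extract such subspaces, sketched below, is PCA (via SVD) over flattened weight snapshots from a training trajectory; afterwards only the low-dimensional coordinates are optimized. This is a generic sketch under that assumption, not the paper's exact procedure.

```python
import torch

def build_subspace(checkpoints):
    # Each checkpoint is a list of parameter tensors; flatten and stack them.
    W = torch.stack([torch.cat([p.flatten() for p in ckpt]) for ckpt in checkpoints])
    W = W - W.mean(dim=0)                    # center the trajectory
    _, _, Vh = torch.linalg.svd(W, full_matrices=False)
    return Vh                                # rows: dominant trajectory directions

def subspace_step(theta0, basis, coords, grad_full, lr=0.1):
    # Project the full gradient into the subspace and update only `coords`.
    coords = coords - lr * (basis @ grad_full)
    theta = theta0 + basis.T @ coords        # map back to full weight space
    return theta, coords
```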
Adversarial attacks on deep neural networks (DNNs) have been known for several years. However, existing adversarial attacks achieve high success rates only when information about the victim DNN is known or can be estimated from the structure…
External link:
http://arxiv.org/abs/2001.06325
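When the victim DNN's internals are unknown, a common black-box strategy is a transfer attack: craft the perturbation on a white-box surrogate model and replay it against the victim. A minimal FGSM-based sketch; surrogate and victim are placeholders.

```python
import torch
import torch.nn.functional as F

def transfer_attack(surrogate, victim, x, y, eps=8/255):
    x = x.clone().requires_grad_(True)
    # White-box FGSM step computed on the surrogate only.
    F.cross_entropy(surrogate(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        fooled = victim(x_adv).argmax(dim=-1) != y  # per-sample transfer success
    return x_adv, fooled
```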
It is now well known that deep neural networks (DNNs) are vulnerable to adversarial attacks. Adversarial samples are similar to clean ones, yet they can fool the attacked DNN into producing incorrect predictions with high confidence. But most of the…
External link:
http://arxiv.org/abs/1912.07160
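The standard recipe for such high-confidence adversarial samples is projected gradient descent (PGD): iterate small signed-gradient steps and project back into the eps-ball around the clean input. A generic sketch, not this paper's attack.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Multi-step PGD on image inputs in [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```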