Showing 1 - 10 of 373 for the search: '"Wang Cheng-long"'
In this work, we systematically explore the data privacy issues of dataset pruning in machine learning systems. Our findings reveal, for the first time, that even if data in the redundant set is solely used before model training, its pruning-phase me…
External link:
http://arxiv.org/abs/2411.15796
Author:
Sun, Guangyan, Jin, Mingyu, Wang, Zhenting, Wang, Cheng-Long, Ma, Siqi, Wang, Qifan, Wu, Ying Nian, Zhang, Yongfeng, Liu, Dongfang
Achieving human-level intelligence requires refining cognitive distinctions between System 1 and System 2 thinking. While contemporary AI, driven by large language models, demonstrates human-like traits, it falls short of genuine cognition. Transitio…
External link:
http://arxiv.org/abs/2408.08862
Author:
Hu, Lijie, Ren, Chenyang, Hu, Zhengyu, Lin, Hongbin, Wang, Cheng-Long, Xiong, Hui, Zhang, Jingfeng, Wang, Di
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer. However, most previous studies focused on cases where the data, including concepts, are…
External link:
http://arxiv.org/abs/2405.15476
By adopting a more flexible definition of unlearning and adjusting the model distribution to simulate training without the targeted data, approximate machine unlearning provides a less resource-demanding alternative to the more laborious exact unlear…
External link:
http://arxiv.org/abs/2403.12830
Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge. In this paper, we propose MoRAL, i.e., Mixture-of-Experts augmented Low-Rank Adaptation for Lifelong Learning. MoRA…
External link:
http://arxiv.org/abs/2402.11260
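The MoRAL snippet above combines a mixture-of-experts router with low-rank adaptation. The following is a minimal numerical sketch of that combination only as described in the snippet; the layer shapes, zero-initialized "up" matrices, and softmax router are common LoRA/MoE conventions assumed here, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n_experts = 16, 16, 4, 3      # input dim, output dim, LoRA rank, experts

W = rng.normal(size=(k, d))            # frozen base weight of the layer
A = rng.normal(size=(n_experts, r, d)) * 0.1  # per-expert low-rank "down" maps
B = np.zeros((n_experts, k, r))        # per-expert "up" maps (zero init, common for LoRA)
G = rng.normal(size=(n_experts, d)) * 0.1     # router weights (assumed softmax gating)

def moral_forward(x):
    """Sketch of a MoE-augmented LoRA layer: a softmax router mixes
    per-expert low-rank updates B_e @ A_e @ x on top of the frozen W @ x."""
    logits = G @ x
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()
    delta = sum(g * (B[e] @ (A[e] @ x)) for e, g in enumerate(gates))
    return W @ x + delta

y = moral_forward(rng.normal(size=d))
```

With the zero-initialized up-projections the adapter contributes nothing at the start, so training begins exactly at the frozen model and only the small per-expert factors and router need gradients.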
We study federated unlearning, a novel problem to eliminate the impact of specific clients or data points on the global model learned via federated learning (FL). This problem is driven by the right to be forgotten and the privacy challenges in FL. W…
External link:
http://arxiv.org/abs/2401.11018
This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node. In the first part, we examine cases with no hidden nodes, specifically focusing on Genera…
External link:
http://arxiv.org/abs/2310.08425
As a way to implement the "right to be forgotten" in machine learning, machine unlearning aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of ot…
External link:
http://arxiv.org/abs/2304.03093
In this paper, we propose a uniformly dithered 1-bit quantization scheme for high-dimensional statistical estimation. The scheme contains truncation, dithering, and quantization as typical steps. As canonical examples, the quantization scheme is appl…
External link:
http://arxiv.org/abs/2202.13157
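The snippet above names three steps: truncation, dithering, and quantization. A minimal sketch of that pipeline follows; the parameter names (`tau`, `delta`) and the sign-averaging estimator are illustrative assumptions based on standard dithered quantization, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def onebit_dither_quantize(x, tau, delta, rng):
    """Uniformly dithered 1-bit quantization (sketch):
    1) truncate x to [-tau, tau],
    2) add uniform dither u ~ Uniform[-delta, delta],
    3) keep only the sign bit."""
    xt = np.clip(x, -tau, tau)                     # truncation step
    u = rng.uniform(-delta, delta, size=xt.shape)  # dithering step
    return np.sign(xt + u)                         # 1-bit quantization

# Why the dither matters: for |x| <= delta, E[sign(x + u)] = x / delta,
# so averaging many 1-bit samples recovers the truncated value.
x = 0.3
bits = onebit_dither_quantize(np.full(200_000, x), tau=1.0, delta=1.0, rng=rng)
est = bits.mean() * 1.0  # unbiased estimate of x (scaled by delta = 1.0)
```

Without the dither, `sign(0.3)` would always be `+1` and the value would be unrecoverable; the random dither turns the hard sign into an unbiased one-bit measurement.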
Published in:
Renewable Energy, August 2024, 229