Showing 1 - 10 of 462 for search: '"Li, Yuyuan"'
Author:
Han, Zhongxuan, Zhang, Li, Chen, Chaochao, Zheng, Xiaolin, Zheng, Fei, Li, Yuyuan, Yin, Jianwei
Federated Learning (FL) employs a training approach to address scenarios where users' data cannot be shared across clients. Achieving fairness in FL is imperative since training data in FL is inherently geographically distributed among diverse user groups…
External link:
http://arxiv.org/abs/2411.06881
Author:
Chen, Chaochao, Zhang, Jiaming, Zhang, Yizhao, Zhang, Li, Lyu, Lingjuan, Li, Yuyuan, Gong, Biao, Yan, Chenggang
With increasing privacy concerns in artificial intelligence, regulations have mandated the right to be forgotten, granting individuals the right to withdraw their data from models. Machine unlearning has emerged as a potential solution to enable selective…
External link:
http://arxiv.org/abs/2408.14393
While generative models have made significant advancements in recent years, they also raise concerns such as privacy breaches and biases. Machine unlearning has emerged as a viable solution, aiming to remove specific training data, e.g., containing…
External link:
http://arxiv.org/abs/2408.01689
Author:
Chen, Chaochao, Zhang, Yizhao, Li, Yuyuan, Wang, Jun, Qi, Lianyong, Xu, Xiaolong, Zheng, Xiaolin, Yin, Jianwei
With the growing privacy concerns in recommender systems, recommendation unlearning is getting increasing attention. Existing studies predominantly use training data, i.e., model inputs, as the unlearning target. However, attackers can extract private information…
External link:
http://arxiv.org/abs/2403.06737
Diffusion models have recently achieved remarkable progress in generating realistic images. However, challenges remain in accurately understanding and synthesizing the layout requirements in the textual prompts. To align the generated image with layout…
External link:
http://arxiv.org/abs/2311.15773
Author:
Li, Yuyuan, Chen, Chaochao, Zheng, Xiaolin, Zhang, Yizhao, Han, Zhongxuan, Meng, Dan, Wang, Jun
Published in:
Proceedings of the 31st ACM International Conference on Multimedia (MM '23), October 29--November 3, 2023, Ottawa, ON, Canada
With the growing privacy concerns in recommender systems, recommendation unlearning, i.e., forgetting the impact of specific learned targets, is getting increasing attention. Existing studies predominantly use training data, i.e., model inputs, as the unlearning target…
External link:
http://arxiv.org/abs/2310.05847
Author:
Han, Zhongxuan, Chen, Chaochao, Zheng, Xiaolin, Liu, Weiming, Wang, Jun, Cheng, Wenjie, Li, Yuyuan
Recommender systems are typically biased toward a small group of users, leading to severe unfairness in recommendation performance, i.e., the User-Oriented Fairness (UOF) issue. The existing research on UOF is limited and fails to deal with the root cause…
External link:
http://arxiv.org/abs/2309.01335
Author:
Chen, Chaochao, Feng, Xiaohua, Li, Yuyuan, Lyu, Lingjuan, Zhou, Jun, Zheng, Xiaolin, Yin, Jianwei
As the parameter size of Large Language Models (LLMs) continues to expand, there is an urgent need to address the scarcity of high-quality data. In response, existing research has attempted to make a breakthrough by incorporating Federated Learning (FL)…
External link:
http://arxiv.org/abs/2307.08925
The increasing concerns regarding the privacy of machine learning models have catalyzed the exploration of machine unlearning, i.e., a process that removes the influence of training data on machine learning models. This concern also arises in…
External link:
http://arxiv.org/abs/2307.03363
Recent regulations on the Right to be Forgotten have greatly influenced the way of running a recommender system, because users now have the right to withdraw their private data. Besides simply deleting the target data in the database, unlearning the…
External link:
http://arxiv.org/abs/2304.10199