Showing 1 - 10 of 533 for search: '"Liu, Jiancheng."'
The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices. Despite growing interest in LLM unlearning, much of …
External link:
http://arxiv.org/abs/2410.17509
In this work, we address the problem of large language model (LLM) unlearning, aiming to remove unwanted data influences and associated model capabilities (e.g., copyrighted data or harmful content generation) while preserving essential model utilities …
External link:
http://arxiv.org/abs/2410.07163
Author:
Zhang, Mingrui, Wang, Chunyang, Kramer, Stephan, Wallwork, Joseph G., Li, Siyi, Liu, Jiancheng, Chen, Xiang, Piggott, Matthew D.
Solving complex Partial Differential Equations (PDEs) accurately and efficiently is an essential and challenging problem in all scientific and engineering disciplines. Mesh movement methods provide the capability to improve the accuracy of the numerical …
External link:
http://arxiv.org/abs/2407.00382
Author:
Di, Zonglin, Zhu, Zhaowei, Jia, Jinghan, Liu, Jiancheng, Takhirov, Zafar, Jiang, Bo, Yao, Yuanshun, Liu, Sijia, Liu, Yang
The objective of machine unlearning (MU) is to eliminate previously learned data from a model. However, it is challenging to strike a balance between computation cost and performance when using existing MU techniques. Taking inspiration from the influence …
External link:
http://arxiv.org/abs/2406.07698
Author:
Zhang, Yimeng, Chen, Xin, Jia, Jinghan, Zhang, Yihua, Fan, Chongyu, Liu, Jiancheng, Hong, Mingyi, Ding, Ke, Liu, Sijia
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. The techniques of machine unlearning, also known as co…
External link:
http://arxiv.org/abs/2405.15234
Author:
Jia, Jinghan, Zhang, Yihua, Zhang, Yimeng, Liu, Jiancheng, Runwal, Bharat, Diffenderfer, James, Kailkhura, Bhavya, Liu, Sijia
Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without …
External link:
http://arxiv.org/abs/2404.18239
The trustworthy machine learning (ML) community is increasingly recognizing the crucial need for models capable of selectively 'unlearning' data points after training. This leads to the problem of machine unlearning (MU), aiming to eliminate the influence …
External link:
http://arxiv.org/abs/2403.07362
In this paper, we prove that a PMCV (i.e. \Delta\vec{H} is proportional to \vec{H}) hypersurface M^n_r of a non-flat pseudo-Riemannian space form N^{n+1}_s(c) with at most two distinct principal curvatures is minimal or locally isoparametric, and compute …
External link:
http://arxiv.org/abs/2403.08205
UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models
Author:
Zhang, Yihua, Fan, Chongyu, Zhang, Yimeng, Yao, Yuguang, Jia, Jinghan, Liu, Jiancheng, Zhang, Gaoyuan, Liu, Gaowen, Kompella, Ramana Rao, Liu, Xiaoming, Liu, Sijia
The technological advancements in diffusion models (DMs) have demonstrated unprecedented capabilities in text-to-image generation and are widely used in diverse applications. However, they have also raised significant societal concerns, such as the g…
External link:
http://arxiv.org/abs/2402.11846