Showing 1 - 10 of 30
for search: '"Jia, Jinghan"'
The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices. Despite growing interest in LLM unlearning, much of …
External link:
http://arxiv.org/abs/2410.17509
In this work, we address the problem of large language model (LLM) unlearning, aiming to remove unwanted data influences and associated model capabilities (e.g., copyrighted data or harmful content generation) while preserving essential model utilities …
External link:
http://arxiv.org/abs/2410.07163
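The objective sketched in this abstract, forgetting targeted data while preserving overall utility, is often instantiated in the unlearning literature as a two-term loss: ascend on a forget set, descend on a retain set. Below is a minimal PyTorch sketch of that generic gradient-difference baseline, not necessarily this paper's method; it assumes a Hugging Face-style causal LM whose forward pass returns a .loss, and the names forget_batch, retain_batch, and lam are illustrative.

    def unlearn_step(model, optimizer, forget_batch, retain_batch, lam=1.0):
        # Gradient-difference baseline: maximize loss on data to forget,
        # minimize it on data to retain, so general utility is preserved.
        forget_loss = model(**forget_batch).loss
        retain_loss = model(**retain_batch).loss
        loss = -forget_loss + lam * retain_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return forget_loss.item(), retain_loss.item()

The retain term is what keeps pure gradient ascent from collapsing the model; lam trades forgetting strength against preserved utility.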
Author:
Jia, Jinghan, Komma, Abi, Leffel, Timothy, Peng, Xujun, Nagesh, Ajay, Soliman, Tamer, Galstyan, Aram, Kumar, Anoop
In task-oriented conversational AI evaluation, unsupervised methods poorly correlate with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero-shot and few-shot capabilities …
External link:
http://arxiv.org/abs/2406.17304
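The zero-shot capability mentioned above is typically operationalized by prompting an LLM to act as a judge of a dialogue turn. A minimal sketch under that assumption follows; call_llm is a hypothetical completion function standing in for whatever model API is used, and the prompt wording is illustrative.

    JUDGE_PROMPT = """You are evaluating a task-oriented dialogue system.
    User request: {request}
    System response: {response}
    Rate the response from 1 (unusable) to 5 (fully resolves the request).
    Answer with a single integer."""

    def llm_judge(request: str, response: str, call_llm) -> int:
        # call_llm: hypothetical str -> str completion function
        raw = call_llm(JUDGE_PROMPT.format(request=request, response=response))
        return int(raw.strip()[0])  # naive parse; real pipelines validate output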
Author:
Di, Zonglin, Zhu, Zhaowei, Jia, Jinghan, Liu, Jiancheng, Takhirov, Zafar, Jiang, Bo, Yao, Yuanshun, Liu, Sijia, Liu, Yang
The objective of machine unlearning (MU) is to eliminate previously learned data from a model. However, it is challenging to strike a balance between computation cost and performance when using existing MU techniques. Taking inspiration from the influence …
External link:
http://arxiv.org/abs/2406.07698
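The influence-function idea alluded to in this abstract estimates how the optimum would move if a training point were removed, which is what makes retraining-free unlearning plausible. The classic first-order estimate (standard influence-function machinery, not necessarily this paper's exact algorithm) reads:

    \hat{\theta}_{-z} \;\approx\; \hat{\theta} \;+\; \frac{1}{n}\, H_{\hat{\theta}}^{-1} \nabla_{\theta}\, \ell(z, \hat{\theta}),
    \qquad
    H_{\hat{\theta}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta})

Unlearning a forget set then amounts to one inverse-Hessian-scaled gradient step, with the Hessian inverse usually approximated (e.g., by conjugate gradients or LiSSA), since forming H explicitly is infeasible at scale.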
Author:
Zhang, Yimeng, Chen, Xin, Jia, Jinghan, Zhang, Yihua, Fan, Chongyu, Liu, Jiancheng, Hong, Mingyi, Ding, Ke, Liu, Sijia
Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. The techniques of machine unlearning, also known as concept erasing, …
External link:
http://arxiv.org/abs/2405.15234
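Concept-erasing fine-tuning of the kind this abstract discusses commonly steers the model's noise prediction for the target concept toward a negatively guided target from a frozen copy of the model. A minimal PyTorch sketch of such an erasing loss (negative-guidance erasing is one published family of methods; the tensor names and eta are illustrative):

    import torch.nn.functional as F

    def erase_loss(eps_student_cond, eps_frozen_cond, eps_frozen_uncond, eta=1.0):
        # Target = the frozen model's unconditional prediction, pushed *away*
        # from the concept direction it estimates for the conditional prompt.
        target = eps_frozen_uncond - eta * (eps_frozen_cond - eps_frozen_uncond)
        return F.mse_loss(eps_student_cond, target.detach())

Training the student's conditional prediction toward this target removes the concept's guidance signal, while the unconditional behavior anchors general image quality.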
Author:
Jia, Jinghan, Zhang, Yihua, Zhang, Yimeng, Liu, Jiancheng, Runwal, Bharat, Diffenderfer, James, Kailkhura, Bhavya, Liu, Sijia
Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without …
External link:
http://arxiv.org/abs/2404.18239
UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models
Author:
Zhang, Yihua, Fan, Chongyu, Zhang, Yimeng, Yao, Yuguang, Jia, Jinghan, Liu, Jiancheng, Zhang, Gaoyuan, Liu, Gaowen, Kompella, Ramana Rao, Liu, Xiaoming, Liu, Sijia
The technological advancements in diffusion models (DMs) have demonstrated unprecedented capabilities in text-to-image generation and are widely used in diverse applications. However, they have also raised significant societal concerns, such as the …
External link:
http://arxiv.org/abs/2402.11846
Author:
Liu, Sijia, Yao, Yuanshun, Jia, Jinghan, Casper, Stephen, Baracaldo, Nathalie, Hase, Peter, Yao, Yuguang, Liu, Chris Yuhao, Xu, Xiaojun, Li, Hang, Varshney, Kush R., Bansal, Mohit, Koyejo, Sanmi, Liu, Yang
We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities …
External link:
http://arxiv.org/abs/2402.08787
Author:
Liang, Shijun, Nguyen, Van Hoang Minh, Jia, Jinghan, Alkhouri, Ismail, Liu, Sijia, Ravishankar, Saiprasad
As the popularity of deep learning (DL) in the field of magnetic resonance imaging (MRI) continues to rise, recent research has indicated that DL-based MRI reconstruction models might be excessively sensitive to minor input disturbances, including …
External link:
http://arxiv.org/abs/2312.07784
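The sensitivity described in this abstract is usually quantified by searching for a small worst-case perturbation of the input that maximizes reconstruction error. A minimal l-infinity PGD sketch under that framing (illustrative, not the paper's exact protocol; recon_net is any differentiable reconstruction network with frozen weights):

    import torch
    import torch.nn.functional as F

    def worst_case_perturbation(recon_net, measurements, reference, eps=0.01, steps=10):
        # Maximize reconstruction error over an l_inf ball of radius eps.
        delta = torch.zeros_like(measurements, requires_grad=True)
        step = 2.5 * eps / steps
        for _ in range(steps):
            err = F.mse_loss(recon_net(measurements + delta), reference)
            err.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        return delta.detach()

The returned delta can then be fed through the network once more to measure how much the reconstruction degrades relative to the clean input.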
Author:
Zhang, Yimeng, Jia, Jinghan, Chen, Xin, Chen, Aochuan, Zhang, Yihua, Liu, Jiancheng, Ding, Ke, Liu, Sijia
The recent advances in diffusion models (DMs) have revolutionized the generation of realistic and complex images. However, these models also introduce potential safety hazards, such as producing harmful content and infringing data copyrights. Despite …
External link:
http://arxiv.org/abs/2310.11868