Description: |
To effectively protect data privacy and implement the “right to be forgotten”, it is necessary to eliminate the influence of specific subsets of training data on machine learning models and to ensure that those data cannot be reverse-engineered. To address this need, the research field of “machine unlearning” has emerged in recent years. This paper reviews progress in machine unlearning research from three aspects: definitions, metrics, and algorithms. First, it systematically outlines the core concepts, definitions, and evaluation metrics of machine unlearning, emphasizing the critical importance of certifiability metrics. Second, it categorizes unlearning algorithms into six classes according to their design principles: structured initial training, influence-function approximation, gradient update, noise-based unlearning, knowledge-distillation unlearning, and boundary unlearning. It describes nine representative machine unlearning algorithms in detail, along with their evolution. Building on a comparison of the strengths and weaknesses of existing algorithms, the paper discusses the potential and significance of constructing a unified, certification-based framework for machine unlearning, and analyzes the theoretical and practical relationships between machine unlearning research and privacy protection. Finally, the paper outlines future research directions for machine unlearning, including extending unlearning algorithms to subfields such as fair machine learning, transfer learning, and reinforcement learning; integrating multiple design approaches into future unlearning algorithms; fostering collaboration between technology and regulation in unlearning practice; and combining machine unlearning with incremental learning to improve the efficiency of managing and operating machine learning models.