A Survey of Machine Unlearning
Awesome Machine Unlearning
I. Introduction
Today, computer systems hold large amounts of personal data. While this abundance of data enables breakthroughs in artificial intelligence, and especially machine learning (ML), it can also threaten user privacy and weaken the bonds of trust between humans and AI. Recent regulations require that, on request, private information about a user be removed from both computer systems and ML models, i.e., the "right to be forgotten". While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Contemporary adversarial attacks on trained models have shown that an adversary can infer whether an instance or an attribute belonged to the training data. This calls for a new paradigm, namely machine unlearning, to make ML models forget particular data.

Recent work on machine unlearning has not fully solved the problem, in part because of a lack of common frameworks and resources. This survey therefore presents a comprehensive examination of machine unlearning's concepts, scenarios, methods, and applications. As a categorized collection of cutting-edge studies, it is intended to serve as a resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. We also highlight key findings, current trends, and research areas that have not yet adopted machine unlearning but could benefit greatly from it. We hope this survey serves as a valuable resource for ML researchers and those seeking to innovate privacy technologies.
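To make the problem concrete, the sketch below contrasts a model trained on all data with the exact-unlearning baseline that retrains from scratch without the forget set, and uses a toy loss-gap membership signal to illustrate why deleting records from the database alone is not enough. This is a minimal illustration using scikit-learn, not a method from the survey; the synthetic dataset, the logistic-regression model, and the forget-set split are arbitrary choices for demonstration.

```python
# Minimal sketch (illustrative only, not from the survey): exact unlearning
# by retraining, plus a toy loss-gap membership signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Records whose owners requested erasure (arbitrary split for the demo).
forget = np.zeros(len(X), dtype=bool)
forget[:50] = True

# Original model: trained on everything, including the forget set.
original = LogisticRegression(max_iter=1000).fit(X, y)

# Exact unlearning baseline: retrain from scratch without the forget set.
# This is the gold standard that approximate methods try to match cheaply.
retrained = LogisticRegression(max_iter=1000).fit(X[~forget], y[~forget])

def mean_loss(model, X_eval, y_eval):
    # Models tend to assign lower loss to examples they were trained on;
    # real membership-inference attacks exploit richer versions of this gap.
    return log_loss(y_eval, model.predict_proba(X_eval), labels=model.classes_)

print("original  loss on forget set:", mean_loss(original, X[forget], y[forget]))
print("retrained loss on forget set:", mean_loss(retrained, X[forget], y[forget]))
```

The gap between the two losses on the forget set is the "memory" that unlearning targets: approximate methods in the list below aim to match the retrained model's behavior at a fraction of the cost of full retraining.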
II. List of Approaches
Total number of rows: 184
Title Venue Year Code Type
UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models arXiv 2024 [Code] Model-Agnostic
Post-Training Attribute Unlearning in Recommender Systems arXiv 2024 - Model-Agnostic
Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems arXiv 2024 - Model-Agnostic
CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation arXiv 2024 - Model-Agnostic
Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective arXiv 2024 - Model-Agnostic
Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning arXiv 2024 - Model-Agnostic
∇τ: Gradient-based and Task-Agnostic machine Unlearning arXiv 2024 - Model-Agnostic
Towards Independence Criterion in Machine Unlearning of Features and Labels arXiv 2024 - Model-Agnostic
Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning arXiv 2024 [Code] Model-Agnostic
Corrective Machine Unlearning arXiv 2024 - Model-Agnostic
Fair Machine Unlearning: Data Removal while Mitigating Disparities - 2024 [Code] Model-Agnostic
Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models arXiv 2024 [Code] Model-Agnostic
CaMU: Disentangling Causal Effects in Deep Model Unlearning arXiv 2024 [Code] Model-Agnostic
SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation ICLR 2024 [Code] Model-Agnostic
Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening AAAI 2024 [Code] Model-Agnostic
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers AAAI 2024 - Model-Agnostic
Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening arXiv 2024 [Code] Model-Agnostic
Zero-Shot Machine Unlearning at Scale via Lipschitz Regularization arXiv 2024 [Code] Model-Agnostic
Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation arXiv 2023 - Model-Agnostic
Towards bridging the gaps between the right to explanation and the right to be forgotten - 2023 - Model-Agnostic
Unlearn What You Want to Forget: Efficient Unlearning for LLMs EMNLP 2023 [Code] Model-Agnostic
Fast Model Debias with Machine Unlearning NeurIPS 2023 [Code] Model-Agnostic
DUCK: Distance-based Unlearning via Centroid Kinematics arXiv 2023 [Code] Model-Agnostic
Open Knowledge Base Canonicalization with Multi-task Unlearning arXiv 2023 - Model-Agnostic
Unlearning via Sparse Representations arXiv 2023 - Model-Agnostic
SecureCut: Federated Gradient Boosting Decision Trees with Efficient Machine Unlearning arXiv 2023 - Model-Agnostic
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks NeurIPS 2023 - Model-Agnostic
Model Sparsity Can Simplify Machine Unlearning NeurIPS 2023 [Code] Model-Agnostic
FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning arXiv 2023 - Model-Agnostic
Tight Bounds for Machine Unlearning via Differential Privacy arXiv 2023 - Model-Agnostic
Machine Unlearning Methodology based on Stochastic Teacher Network arXiv 2023 - Model-Agnostic
From Adaptive Query Release to Machine Unlearning arXiv 2023 - Model-Agnostic
Towards Adversarial Evaluations for Inexact Machine Unlearning arXiv 2023 [Code] Model-Agnostic
KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment arXiv 2023 [Code] Model-Agnostic
On the Trade-Off between Actionable Explanations and the Right to be Forgotten arXiv 2023 - Model-Agnostic
Towards Unbounded Machine Unlearning arXiv 2023 [Code] Model-Agnostic
Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations arXiv 2023 - Model-Agnostic
To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods arXiv 2023 [Code] Model-Agnostic
Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization arXiv 2022 - Model-Agnostic
Certified Data Removal in Sum-Product Networks ICKG 2022 [Code] Model-Agnostic
Learning with Recoverable Forgetting ECCV 2022 - Model-Agnostic
Continual Learning and Private Unlearning CoLLAs 2022 [Code] Model-Agnostic
Verifiable and Provably Secure Machine Unlearning arXiv 2022 [Code] Model-Agnostic
VeriFi: Towards Verifiable Federated Unlearning arXiv 2022 - Model-Agnostic
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information S&P 2022 - Model-Agnostic
Fast Yet Effective Machine Unlearning arXiv 2022 - Model-Agnostic
Membership Inference via Backdooring IJCAI 2022 [Code] Model-Agnostic
Forget Unlearning: Towards True Data-Deletion in Machine Learning ICLR 2022 - Model-Agnostic
Zero-Shot Machine Unlearning arXiv 2022 - Model-Agnostic
Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations arXiv 2022 - Model-Agnostic
Few-Shot Unlearning ICLR 2022 - Model-Agnostic
Federated Unlearning: How to Efficiently Erase a Client in FL? UpML Workshop 2022 - Model-Agnostic
Machine Unlearning Method Based On Projection Residual DSAA 2022 - Model-Agnostic
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning AAAI 2022 [Code] Model-Agnostic
Athena: Probabilistic Verification of Machine Unlearning PoPETs 2022 - Model-Agnostic
FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning ProvSec 2022 - Model-Agnostic
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning PETS 2022 - Model-Agnostic
Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization NeurIPS 2022 - Model-Agnostic
The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining INFOCOM 2022 [Code] Model-Agnostic
Backdoor Defense with Machine Unlearning INFOCOM 2022 - Model-Agnostic
Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten ASIA CCS 2022 - Model-Agnostic
Federated Unlearning for On-Device Recommendation arXiv 2022 - Model-Agnostic
Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher arXiv 2022 - Model-Agnostic
Efficient Two-Stage Model Retraining for Machine Unlearning CVPR Workshop 2022 - Model-Agnostic
Learn to Forget: Machine Unlearning Via Neuron Masking IEEE 2021 - Model-Agnostic
Adaptive Machine Unlearning NeurIPS 2021 [Code] Model-Agnostic
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning ALT 2021 - Model-Agnostic
Remember What You Want to Forget: Algorithms for Machine Unlearning NeurIPS 2021 - Model-Agnostic
FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models IWQoS 2021 - Model-Agnostic
Federated Unlearning IWQoS 2021 [Code] Model-Agnostic
Machine Unlearning via Algorithmic Stability COLT 2021 - Model-Agnostic
EMA: Auditing Data Removal from Trained Models MICCAI 2021 [Code] Model-Agnostic
Knowledge-Adaptation Priors NeurIPS 2021 [Code] Model-Agnostic
PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models SIGMOD 2020 - Model-Agnostic
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks CVPR 2020 - Model-Agnostic
Learn to Forget: User-Level Memorization Elimination in Federated Learning arXiv 2020 - Model-Agnostic
Certified Data Removal from Machine Learning Models ICML 2020 - Model-Agnostic
Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale arXiv 2020 - Model-Agnostic
A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine Cluster Computing 2019 - Model-Agnostic
Making AI Forget You: Data Deletion in Machine Learning NeurIPS 2019 - Model-Agnostic
Lifelong Anomaly Detection Through Unlearning CCS 2019 - Model-Agnostic
Learning Not to Learn: Training Deep Neural Networks With Biased Data CVPR 2019 - Model-Agnostic
Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning ASIACCS 2018 [Code] Model-Agnostic
Understanding Black-box Predictions via Influence Functions ICML 2017 [Code] Model-Agnostic
Towards Making Systems Forget with Machine Unlearning S&P 2015 - Model-Agnostic
Incremental and decremental training for linear classification KDD 2014 [Code] Model-Agnostic
Multiple Incremental Decremental Learning of Support Vector Machines NIPS 2009 - Model-Agnostic
Incremental and Decremental Learning for Linear Support Vector Machines ICANN 2007 - Model-Agnostic
Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines OSB 2007 - Model-Agnostic
Multicategory Incremental Proximal Support Vector Classifiers KES 2003 - Model-Agnostic
Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients DaWak 2003 - Model-Agnostic
Incremental and Decremental Support Vector Machine Learning NeurIPS 2000 - Model-Agnostic
Machine Unlearning for Image-to-Image Generative Models arXiv 2024 [Code] Model-Intrinsic
Towards Efficient and Effective Unlearning of Large Language Models for Recommendation arXiv 2024 [Code] Model-Intrinsic
Dissecting Language Models: Machine Unlearning via Selective Pruning arXiv 2024 - Model-Intrinsic
Decentralized Federated Unlearning on Blockchain arXiv 2024 - Model-Intrinsic
Unlink to Unlearn: Simplifying Edge Unlearning in GNNs WWW 2024 [Code] Model-Intrinsic
Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models EMNLP 2024 - Model-Intrinsic
Towards Effective and General Graph Unlearning via Mutual Evolution AAAI 2024 - Model-Intrinsic
Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation AAAI 2024 [Code] Model-Intrinsic
FAST: Feature Aware Similarity Thresholding for Weak Unlearning in Black-Box Generative Models arXiv 2023 [Code] Model-Intrinsic
Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models arXiv 2023 - Model-Intrinsic
Certified Minimax Unlearning with Generalization Rates and Deletion Capacity NeurIPS 2023 - Model-Intrinsic
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs NeurIPS 2023 - Model-Intrinsic
Multimodal Machine Unlearning arXiv 2023 - Model-Intrinsic
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks arXiv 2023 - Model-Intrinsic
SAFE: Machine Unlearning With Shard Graphs ICCV 2023 - Model-Intrinsic
MUter: Machine Unlearning on Adversarially Trained Models ICCV 2023 - Model-Intrinsic
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning WWW 2023 [Code] Model-Intrinsic
One-Shot Machine Unlearning with Mnemonic Code arXiv 2023 - Model-Intrinsic
Inductive Graph Unlearning USENIX 2023 [Code] Model-Intrinsic
ERM-KTP: Knowledge-level Machine Unlearning via Knowledge Transfer CVPR 2023 [Code] Model-Intrinsic
GNNDelete: A General Strategy for Unlearning in Graph Neural Networks ICLR 2023 [Code] Model-Intrinsic
Unfolded Self-Reconstruction LSH: Towards Machine Unlearning in Approximate Nearest Neighbour Search arXiv 2023 [Code] Model-Intrinsic
Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection AISTATS 2023 [Code] Model-Intrinsic
Unrolling SGD: Understanding Factors Influencing Machine Unlearning EuroS&P 2022 [Code] Model-Intrinsic
Graph Unlearning CCS 2022 [Code] Model-Intrinsic
Certified Graph Unlearning GLFrontiers Workshop 2022 [Code] Model-Intrinsic
Skin Deep Unlearning: Artefact and Instrument Debiasing in the Context of Melanoma Classification ICML 2022 [Code] Model-Intrinsic
Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning AISTATS 2022 - Model-Intrinsic
Unlearning Protected User Attributes in Recommendations with Adversarial Training SIGIR 2022 [Code] Model-Intrinsic
Recommendation Unlearning TheWebConf 2022 [Code] Model-Intrinsic
Knowledge Neurons in Pretrained Transformers ACL 2022 [Code] Model-Intrinsic
Memory-Based Model Editing at Scale ICML 2022 [Code] Model-Intrinsic
Forgetting Fast in Recommender Systems arXiv 2022 - Model-Intrinsic
Unlearning Nonlinear Graph Classifiers in the Limited Training Data Regime arXiv 2022 - Model-Intrinsic
Deep Regression Unlearning arXiv 2022 - Model-Intrinsic
Quark: Controllable Text Generation with Reinforced Unlearning arXiv 2022 [Code] Model-Intrinsic
Forget-SVGD: Particle-Based Bayesian Federated Unlearning DSL Workshop 2022 - Model-Intrinsic
Machine Unlearning of Federated Clusters arXiv 2022 - Model-Intrinsic
Machine Unlearning for Image Retrieval: A Generative Scrubbing Approach MM 2022 - Model-Intrinsic
Machine Unlearning: Linear Filtration for Logit-based Classifiers Machine Learning 2022 - Model-Intrinsic
Deep Unlearning via Randomized Conditionally Independent Hessians CVPR 2022 [Code] Model-Intrinsic
Challenges and Pitfalls of Bayesian Unlearning UPML Workshop 2022 - Model-Intrinsic
Federated Unlearning via Class-Discriminative Pruning WWW 2022 - Model-Intrinsic
Active forgetting via influence estimation for neural networks Int. J. Intel. Systems 2022 - Model-Intrinsic
Variational Bayesian Unlearning NeurIPS 2020 - Model-Intrinsic
Revisiting Machine Learning Training Process for Enhanced Data Privacy IC3 2021 - Model-Intrinsic
Knowledge Removal in Sampling-based Bayesian Inference ICLR 2021 [Code] Model-Intrinsic
Mixed-Privacy Forgetting in Deep Networks CVPR 2021 - Model-Intrinsic
HedgeCut: Maintaining Randomised Trees for Low-Latency Machine Unlearning SIGMOD 2021 [Code] Model-Intrinsic
A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization MLSP 2021 - Model-Intrinsic
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks arXiv 2021 - Model-Intrinsic
Bayesian Inference Forgetting arXiv 2021 [Code] Model-Intrinsic
Approximate Data Deletion from Machine Learning Models AISTATS 2021 [Code] Model-Intrinsic
Online Forgetting Process for Linear Regression Models AISTATS 2021 - Model-Intrinsic
RevFRF: Enabling Cross-domain Random Forest Training with Revocable Federated Learning IEEE 2021 - Model-Intrinsic
Coded Machine Unlearning IEEE Access 2021 - Model-Intrinsic
Machine Unlearning for Random Forests ICML 2021 - Model-Intrinsic
Bayesian Variational Federated Learning and Unlearning in Decentralized Networks SPAWC 2021 - Model-Intrinsic
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations ECCV 2020 - Model-Intrinsic
Influence Functions in Deep Learning Are Fragile arXiv 2020 - Model-Intrinsic
Deep Autoencoding Topic Model With Scalable Hybrid Bayesian Inference IEEE 2020 - Model-Intrinsic
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks CVPR 2020 - Model-Intrinsic
Uncertainty in Neural Networks: Approximately Bayesian Ensembling AISTATS 2020 [Code] Model-Intrinsic
Certified Data Removal from Machine Learning Models ICML 2020 - Model-Intrinsic
DeltaGrad: Rapid retraining of machine learning models ICML 2020 [Code] Model-Intrinsic
Making AI Forget You: Data Deletion in Machine Learning NeurIPS 2019 - Model-Intrinsic
“Amnesia” – Towards Machine Learning Models That Can Forget User Data Very Fast AIDB Workshop 2019 [Code] Model-Intrinsic
A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine Cluster Computing 2019 - Model-Intrinsic
Neural Text Generation With Unlikelihood Training arXiv 2019 [Code] Model-Intrinsic
Bayesian Neural Networks with Weight Sharing Using Dirichlet Processes IEEE 2018 [Code] Model-Intrinsic
Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems arXiv 2023 [Code] Data-Driven
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks NeurIPS-TSRML 2022 [Code] Data-Driven
Forget Unlearning: Towards True Data Deletion in Machine Learning ICLR 2022 - Data-Driven
ARCANE: An Efficient Architecture for Exact Machine Unlearning IJCAI 2022 - Data-Driven
PUMA: Performance Unchanged Model Augmentation for Training Data Removal AAAI 2022 - Data-Driven
Certifiable Unlearning Pipelines for Logistic Regression: An Experimental Study MAKE 2022 [Code] Data-Driven
Zero-Shot Machine Unlearning arXiv 2022 - Data-Driven
GRAPHEDITOR: An Efficient Graph Representation Learning and Unlearning Approach - 2022 [Code] Data-Driven
Fast Model Update for IoT Traffic Anomaly Detection with Machine Unlearning IEEE IoT-J 2022 - Data-Driven
Learning to Refit for Convex Learning Problems arXiv 2021 - Data-Driven
Fast Yet Effective Machine Unlearning arXiv 2021 - Data-Driven
Learning with Selective Forgetting IJCAI 2021 - Data-Driven
SSSE: Efficiently Erasing Samples from Trained Machine Learning Models NeurIPS-PRIML 2021 - Data-Driven
How Does Data Augmentation Affect Privacy in Machine Learning? AAAI 2021 [Code] Data-Driven
Coded Machine Unlearning IEEE Access 2021 - Data-Driven
Machine Unlearning S&P 2021 [Code] Data-Driven
How Does Data Augmentation Affect Privacy in Machine Learning? AAAI 2021 [Code] Data-Driven
Amnesiac Machine Learning AAAI 2021 [Code] Data-Driven
Unlearnable Examples: Making Personal Data Unexploitable ICLR 2021 [Code] Data-Driven
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning ALT 2021 - Data-Driven
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models USENIX Sec. Sym. 2020 [Code] Data-Driven
PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models SIGMOD 2020 - Data-Driven
DeltaGrad: Rapid retraining of machine learning models ICML 2020 [Code] Data-Driven
III. Citations
Source: https://github.com/tamlhp/awesome-machine-unlearning
Paper:   https://arxiv.org/abs/2209.02299