A Survey of Machine Unlearning
Awesome Machine Unlearning
I. Introduction
Today, computer systems hold vast amounts of personal data. While this abundance of data enables breakthroughs in artificial intelligence, and especially machine learning (ML), it also threatens user privacy and can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user be removed from both computer systems and ML models, i.e., "the right to be forgotten". Although removing data from back-end databases is relatively straightforward, it is not sufficient in the AI context, as ML models often 'remember' the old data: contemporary adversarial attacks on trained models have shown that an adversary can infer whether an instance or an attribute belonged to the training data. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget particular data. Recent work on machine unlearning has not fully solved the problem, in part due to the lack of common frameworks and resources. This paper therefore presents a comprehensive examination of machine unlearning's concepts, scenarios, methods, and applications. Specifically, as a curated collection of cutting-edge studies, this article is intended to serve as a comprehensive resource for researchers and practitioners seeking an introduction to machine unlearning and its formulations, design criteria, removal requests, algorithms, and applications. In addition, we highlight key findings, current trends, and research areas that have not yet adopted machine unlearning but could benefit greatly from it. We hope this survey serves as a valuable resource for ML researchers and those seeking to innovate privacy technologies.
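To make the "forget particular data" requirement concrete, the sketch below illustrates the simplest exact-unlearning baseline: retraining from scratch on the retained data, so the deleted point leaves no trace in the model parameters. This is a minimal toy illustration, not the method of any surveyed paper; the nearest-centroid "model" and the `train`/`predict`/`unlearn` helpers are hypothetical names invented for this example.

```python
# Toy illustration of exact unlearning via full retraining.
# The "model" is a nearest-centroid classifier over 1-D points;
# all names here are illustrative, not from any surveyed paper.

def train(data):
    """'Train' by computing the per-class mean of 1-D points."""
    groups = {}
    for label, x in data:
        groups.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(model, x):
    """Predict the class whose centroid is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

def unlearn(data, forget_set):
    """Exact unlearning: retrain from scratch on the retained data."""
    retained = [d for d in data if d not in forget_set]
    return train(retained), retained

data = [("a", 0.0), ("a", 1.0), ("b", 5.0), ("b", 9.0)]
model = train(data)                                   # "b" centroid: 7.0
unlearned_model, retained = unlearn(data, [("b", 9.0)])
# After forgetting ("b", 9.0), the "b" centroid is exactly what training
# on the retained data alone would produce.
assert unlearned_model == train(retained)
```

Retraining gives a perfect forgetting guarantee but is prohibitively expensive for large models, which is precisely the efficiency gap the approximate and partitioned methods catalogued below aim to close.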
II. List of Approaches
Title Venue Year Code Type
Towards Adversarial Evaluations for Inexact Machine Unlearning arXiv 2023 [Code] Model-Agnostic
KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment arXiv 2023 [Code] Model-Agnostic
On the Trade-Off between Actionable Explanations and the Right to be Forgotten arXiv 2023 - Model-Agnostic
Towards Unbounded Machine Unlearning arXiv 2023 [Code] Model-Agnostic
Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations arXiv 2023 - Model-Agnostic
To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods arXiv 2023 [Code] Model-Agnostic
Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization arXiv 2022 - Model-Agnostic
Certified Data Removal in Sum-Product Networks ICKG 2022 [Code] Model-Agnostic
Learning with Recoverable Forgetting ECCV 2022 - Model-Agnostic
Continual Learning and Private Unlearning CoLLAs 2022 [Code] Model-Agnostic
Verifiable and Provably Secure Machine Unlearning arXiv 2022 [Code] Model-Agnostic
VeriFi: Towards Verifiable Federated Unlearning arXiv 2022 - Model-Agnostic
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information S&P 2022 - Model-Agnostic
Fast Yet Effective Machine Unlearning arXiv 2022 - Model-Agnostic
Membership Inference via Backdooring IJCAI 2022 [Code] Model-Agnostic
Forget Unlearning: Towards True Data-Deletion in Machine Learning ICLR 2022 - Model-Agnostic
Zero-Shot Machine Unlearning arXiv 2022 - Model-Agnostic
Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations arXiv 2022 - Model-Agnostic
Few-Shot Unlearning ICLR 2022 - Model-Agnostic
Federated Unlearning: How to Efficiently Erase a Client in FL? UpML Workshop 2022 - Model-Agnostic
Machine Unlearning Method Based On Projection Residual DSAA 2022 - Model-Agnostic
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning AAAI 2022 [Code] Model-Agnostic
Athena: Probabilistic Verification of Machine Unlearning PoPETs 2022 - Model-Agnostic
FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning ProvSec 2022 - Model-Agnostic
Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning PETS 2022 - Model-Agnostic
Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization NeurIPS 2022 - Model-Agnostic
The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining INFOCOM 2022 [Code] Model-Agnostic
Backdoor Defense with Machine Unlearning INFOCOM 2022 - Model-Agnostic
Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten ASIA CCS 2022 - Model-Agnostic
Federated Unlearning for On-Device Recommendation arXiv 2022 - Model-Agnostic
Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher arXiv 2022 - Model-Agnostic
Efficient Two-Stage Model Retraining for Machine Unlearning CVPR Workshop 2022 - Model-Agnostic
Learn to Forget: Machine Unlearning Via Neuron Masking IEEE 2021 - Model-Agnostic
Adaptive Machine Unlearning NeurIPS 2021 [Code] Model-Agnostic
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning ALT 2021 - Model-Agnostic
Remember What You Want to Forget: Algorithms for Machine Unlearning NeurIPS 2021 - Model-Agnostic
FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models IWQoS 2021 - Model-Agnostic
Federated Unlearning IWQoS 2021 [Code] Model-Agnostic
Machine Unlearning via Algorithmic Stability COLT 2021 - Model-Agnostic
EMA: Auditing Data Removal from Trained Models MICCAI 2021 [Code] Model-Agnostic
Knowledge-Adaptation Priors NeurIPS 2021 [Code] Model-Agnostic
PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models SIGMOD 2020 - Model-Agnostic
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks CVPR 2020 - Model-Agnostic
Learn to Forget: User-Level Memorization Elimination in Federated Learning arXiv 2020 - Model-Agnostic
Certified Data Removal from Machine Learning Models ICML 2020 - Model-Agnostic
Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale arXiv 2020 - Model-Agnostic
A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine Cluster Computing 2019 - Model-Agnostic
Making AI Forget You: Data Deletion in Machine Learning NeurIPS 2019 - Model-Agnostic
Lifelong Anomaly Detection Through Unlearning CCS 2019 - Model-Agnostic
Learning Not to Learn: Training Deep Neural Networks With Biased Data CVPR 2019 - Model-Agnostic
Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning ASIACCS 2018 [Code] Model-Agnostic
Understanding Black-box Predictions via Influence Functions ICML 2017 [Code] Model-Agnostic
Towards Making Systems Forget with Machine Unlearning S&P 2015 - Model-Agnostic
Incremental and decremental training for linear classification KDD 2014 [Code] Model-Agnostic
Multiple Incremental Decremental Learning of Support Vector Machines NIPS 2009 - Model-Agnostic
Incremental and Decremental Learning for Linear Support Vector Machines ICANN 2007 - Model-Agnostic
Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines OSB 2007 - Model-Agnostic
Multicategory Incremental Proximal Support Vector Classifiers KES 2003 - Model-Agnostic
Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients DaWak 2003 - Model-Agnostic
Incremental and Decremental Support Vector Machine Learning NeurIPS 2000 - Model-Agnostic
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning WWW 2023 [Code] Model-Intrinsic
One-Shot Machine Unlearning with Mnemonic Code arXiv 2023 - Model-Intrinsic
Inductive Graph Unlearning USENIX 2023 [Code] Model-Intrinsic
ERM-KTP: Knowledge-level Machine Unlearning via Knowledge Transfer CVPR 2023 [Code] Model-Intrinsic
GNNDelete: A General Strategy for Unlearning in Graph Neural Networks ICLR 2023 [Code] Model-Intrinsic
Unfolded Self-Reconstruction LSH: Towards Machine Unlearning in Approximate Nearest Neighbour Search arXiv 2023 [Code] Model-Intrinsic
Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection AISTATS 2023 [Code] Model-Intrinsic
Unrolling SGD: Understanding Factors Influencing Machine Unlearning EuroS&P 2022 [Code] Model-Intrinsic
Graph Unlearning CCS 2022 [Code] Model-Intrinsic
Certified Graph Unlearning GLFrontiers Workshop 2022 [Code] Model-Intrinsic
Skin Deep Unlearning: Artefact and Instrument Debiasing in the Context of Melanoma Classification ICML 2022 [Code] Model-Intrinsic
Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning AISTATS 2022 - Model-Intrinsic
Unlearning Protected User Attributes in Recommendations with Adversarial Training SIGIR 2022 [Code] Model-Intrinsic
Recommendation Unlearning TheWebConf 2022 [Code] Model-Intrinsic
Knowledge Neurons in Pretrained Transformers ACL 2022 [Code] Model-Intrinsic
Memory-Based Model Editing at Scale ICML 2022 [Code] Model-Intrinsic
Forgetting Fast in Recommender Systems arXiv 2022 - Model-Intrinsic
Unlearning Nonlinear Graph Classifiers in the Limited Training Data Regime arXiv 2022 - Model-Intrinsic
Deep Regression Unlearning arXiv 2022 - Model-Intrinsic
Quark: Controllable Text Generation with Reinforced Unlearning arXiv 2022 [Code] Model-Intrinsic
Forget-SVGD: Particle-Based Bayesian Federated Unlearning DSL Workshop 2022 - Model-Intrinsic
Machine Unlearning of Federated Clusters arXiv 2022 - Model-Intrinsic
Machine Unlearning for Image Retrieval: A Generative Scrubbing Approach MM 2022 - Model-Intrinsic
Machine Unlearning: Linear Filtration for Logit-based Classifiers Machine Learning 2022 - Model-Intrinsic
Deep Unlearning via Randomized Conditionally Independent Hessians CVPR 2022 [Code] Model-Intrinsic
Challenges and Pitfalls of Bayesian Unlearning UPML Workshop 2022 - Model-Intrinsic
Federated Unlearning via Class-Discriminative Pruning WWW 2022 - Model-Intrinsic
Active forgetting via influence estimation for neural networks Int. J. Intel. Systems 2022 - Model-Intrinsic
Variational Bayesian Unlearning NeurIPS 2020 - Model-Intrinsic
Revisiting Machine Learning Training Process for Enhanced Data Privacy IC3 2021 - Model-Intrinsic
Knowledge Removal in Sampling-based Bayesian Inference ICLR 2021 [Code] Model-Intrinsic
Mixed-Privacy Forgetting in Deep Networks CVPR 2021 - Model-Intrinsic
HedgeCut: Maintaining Randomised Trees for Low-Latency Machine Unlearning SIGMOD 2021 [Code] Model-Intrinsic
A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization MLSP 2021 - Model-Intrinsic
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks arXiv 2021 - Model-Intrinsic
Bayesian Inference Forgetting arXiv 2021 [Code] Model-Intrinsic
Approximate Data Deletion from Machine Learning Models AISTATS 2021 [Code] Model-Intrinsic
Online Forgetting Process for Linear Regression Models AISTATS 2021 - Model-Intrinsic
RevFRF: Enabling Cross-domain Random Forest Training with Revocable Federated Learning IEEE 2021 - Model-Intrinsic
Coded Machine Unlearning IEEE Access 2021 - Model-Intrinsic
Machine Unlearning for Random Forests ICML 2021 - Model-Intrinsic
Bayesian Variational Federated Learning and Unlearning in Decentralized Networks SPAWC 2021 - Model-Intrinsic
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations ECCV 2020 - Model-Intrinsic
Influence Functions in Deep Learning Are Fragile arXiv 2020 - Model-Intrinsic
Deep Autoencoding Topic Model With Scalable Hybrid Bayesian Inference IEEE 2020 - Model-Intrinsic
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks CVPR 2020 - Model-Intrinsic
Uncertainty in Neural Networks: Approximately Bayesian Ensembling AISTATS 2020 [Code] Model-Intrinsic
Certified Data Removal from Machine Learning Models ICML 2020 - Model-Intrinsic
DeltaGrad: Rapid retraining of machine learning models ICML 2020 [Code] Model-Intrinsic
Making AI Forget You: Data Deletion in Machine Learning NeurIPS 2019 - Model-Intrinsic
“Amnesia” – Towards Machine Learning Models That Can Forget User Data Very Fast AIDB Workshop 2019 [Code] Model-Intrinsic
A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine Cluster Computing 2019 - Model-Intrinsic
Neural Text Generation With Unlikelihood Training arXiv 2019 [Code] Model-Intrinsic
Bayesian Neural Networks with Weight Sharing Using Dirichlet Processes IEEE 2018 [Code] Model-Intrinsic
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks NeurIPS-TSRML 2022 [Code] Data-Driven
Forget Unlearning: Towards True Data Deletion in Machine Learning ICLR 2022 - Data-Driven
ARCANE: An Efficient Architecture for Exact Machine Unlearning IJCAI 2022 - Data-Driven
PUMA: Performance Unchanged Model Augmentation for Training Data Removal AAAI 2022 - Data-Driven
Certifiable Unlearning Pipelines for Logistic Regression: An Experimental Study MAKE 2022 [Code] Data-Driven
Zero-Shot Machine Unlearning arXiv 2022 - Data-Driven
GRAPHEDITOR: An Efficient Graph Representation Learning and Unlearning Approach - 2022 [Code] Data-Driven
Fast Model Update for IoT Traffic Anomaly Detection with Machine Unlearning IEEE IoT-J 2022 - Data-Driven
Learning to Refit for Convex Learning Problems arXiv 2021 - Data-Driven
Fast Yet Effective Machine Unlearning arXiv 2021 - Data-Driven
Learning with Selective Forgetting IJCAI 2021 - Data-Driven
SSSE: Efficiently Erasing Samples from Trained Machine Learning Models NeurIPS-PRIML 2021 - Data-Driven
How Does Data Augmentation Affect Privacy in Machine Learning? AAAI 2021 [Code] Data-Driven
Coded Machine Unlearning IEEE Access 2021 - Data-Driven
Machine Unlearning IEEE 2021 [Code] Data-Driven
Amnesiac Machine Learning AAAI 2021 [Code] Data-Driven
Unlearnable Examples: Making Personal Data Unexploitable ICLR 2021 [Code] Data-Driven
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning ALT 2021 - Data-Driven
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models USENIX Sec. Sym. 2020 [Code] Data-Driven
PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models SIGMOD 2020 - Data-Driven
DeltaGrad: Rapid retraining of machine learning models ICML 2020 [Code] Data-Driven
III. Citations
Source: https://github.com/tamlhp/awesome-machine-unlearning
Paper:   https://arxiv.org/abs/2209.02299